Jan 23 10:44:40 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 23 10:44:40 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 23 10:44:40 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
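The crashkernel= parameter uses kdump's range syntax (start-[end]:size, comma-separated); the kernel picks the reservation size from the range containing total RAM. A minimal sketch of that selection in Python, assuming the documented "RAM >= start and RAM < end" matching (the helper names are illustrative, not the kernel's actual parser):

    # Sketch of crashkernel=range:size selection (assumed semantics:
    # pick the size whose range contains total RAM).
    UNITS = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}

    def to_bytes(s):
        return int(s[:-1]) * UNITS[s[-1]] if s and s[-1] in UNITS else int(s or 0)

    def crashkernel_size(spec, ram_bytes):
        for entry in spec.split(","):
            rng, size = entry.split(":")
            start, _, end = rng.partition("-")
            lo = to_bytes(start)
            hi = to_bytes(end) if end else float("inf")
            if lo <= ram_bytes < hi:
                return to_bytes(size)
        return 0

    spec = "1G-2G:192M,2G-64G:256M,64G-:512M"
    print(crashkernel_size(spec, 8 * (1 << 30)) // (1 << 20))  # -> 256

For this 8 GiB guest the 2G-64G band applies, which matches the 256 MB reservation logged below.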
Jan 23 10:44:40 localhost kernel: BIOS-provided physical RAM map:
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 10:44:40 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
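Summing the three "usable" e820 ranges shows the firmware exposes just under 8 GiB, split around the legacy hole below 1 MiB and the firmware/PCI window below 4 GiB. A quick check, with the end addresses treated as inclusive, as e820 prints them:

    # Sum of the BIOS-e820 "usable" ranges printed above.
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / (1 << 30):.3f} GiB")  # -> 8589061120 bytes = 8.000 GiB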
Jan 23 10:44:40 localhost kernel: NX (Execute Disable) protection: active
Jan 23 10:44:40 localhost kernel: APIC: Static calls initialized
Jan 23 10:44:40 localhost kernel: SMBIOS 2.8 present.
Jan 23 10:44:40 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 23 10:44:40 localhost kernel: Hypervisor detected: KVM
Jan 23 10:44:40 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 10:44:40 localhost kernel: kvm-clock: using sched offset of 3620318961 cycles
Jan 23 10:44:40 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 10:44:40 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 23 10:44:40 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 10:44:40 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 10:44:40 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 23 10:44:40 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 10:44:40 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 23 10:44:40 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 23 10:44:40 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 23 10:44:40 localhost kernel: Using GB pages for direct mapping
Jan 23 10:44:40 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 23 10:44:40 localhost kernel: ACPI: Early table checksum verification disabled
Jan 23 10:44:40 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 10:44:40 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 10:44:40 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 10:44:40 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 10:44:40 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 23 10:44:40 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 10:44:40 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 10:44:40 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 23 10:44:40 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 23 10:44:40 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 23 10:44:40 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 23 10:44:40 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 23 10:44:40 localhost kernel: No NUMA configuration found
Jan 23 10:44:40 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 23 10:44:40 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 23 10:44:40 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
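The reserved window is the 256 MiB selected from the 2G-64G band of the crashkernel= spec; the end address is exclusive here, so the size falls out directly:

    start, end = 0x00000000af000000, 0x00000000bf000000  # from the log line above
    print((end - start) // (1 << 20), "MiB")              # -> 256 MiB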
Jan 23 10:44:40 localhost kernel: Zone ranges:
Jan 23 10:44:40 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 10:44:40 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 10:44:40 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 10:44:40 localhost kernel:   Device   empty
Jan 23 10:44:40 localhost kernel: Movable zone start for each node
Jan 23 10:44:40 localhost kernel: Early memory node ranges
Jan 23 10:44:40 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 10:44:40 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 23 10:44:40 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 10:44:40 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 23 10:44:40 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 10:44:40 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 10:44:40 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 23 10:44:40 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 10:44:40 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 10:44:40 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 10:44:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 10:44:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 10:44:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 10:44:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 10:44:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 10:44:40 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 10:44:40 localhost kernel: TSC deadline timer available
Jan 23 10:44:40 localhost kernel: CPU topo: Max. logical packages:   8
Jan 23 10:44:40 localhost kernel: CPU topo: Max. logical dies:       8
Jan 23 10:44:40 localhost kernel: CPU topo: Max. dies per package:   1
Jan 23 10:44:40 localhost kernel: CPU topo: Max. threads per core:   1
Jan 23 10:44:40 localhost kernel: CPU topo: Num. cores per package:     1
Jan 23 10:44:40 localhost kernel: CPU topo: Num. threads per package:   1
Jan 23 10:44:40 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 23 10:44:40 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 23 10:44:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 23 10:44:40 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 23 10:44:40 localhost kernel: Booting paravirtualized kernel on KVM
Jan 23 10:44:40 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 10:44:40 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 23 10:44:40 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 23 10:44:40 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 23 10:44:40 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 23 10:44:40 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 10:44:40 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 10:44:40 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
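BOOT_IMAGE= is written by GRUB and means nothing to the kernel, so it is handed through to user space unchanged (it reappears in /init's environment at the end of kernel init, below). A hedged sketch of splitting the same string back out of /proc/cmdline; the naive whitespace split is safe here only because none of these values contain spaces:

    # Split the live kernel command line into key/value pairs.
    with open("/proc/cmdline") as f:
        params = f.read().split()
    for p in params:
        key, _, value = p.partition("=")
        print(f"{key:12} = {value}")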
Jan 23 10:44:40 localhost kernel: random: crng init done
Jan 23 10:44:40 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 10:44:40 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 10:44:40 localhost kernel: Fallback order for Node 0: 0 
Jan 23 10:44:40 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 23 10:44:40 localhost kernel: Policy zone: Normal
Jan 23 10:44:40 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 10:44:40 localhost kernel: software IO TLB: area num 8.
Jan 23 10:44:40 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 23 10:44:40 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 23 10:44:40 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 23 10:44:40 localhost kernel: Dynamic Preempt: voluntary
Jan 23 10:44:40 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 10:44:40 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 23 10:44:40 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 23 10:44:40 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 23 10:44:40 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 23 10:44:40 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 23 10:44:40 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 10:44:40 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 23 10:44:40 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 10:44:40 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 10:44:40 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 10:44:40 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 23 10:44:40 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 10:44:40 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 23 10:44:40 localhost kernel: Console: colour VGA+ 80x25
Jan 23 10:44:40 localhost kernel: printk: console [ttyS0] enabled
Jan 23 10:44:40 localhost kernel: ACPI: Core revision 20230331
Jan 23 10:44:40 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 10:44:40 localhost kernel: x2apic enabled
Jan 23 10:44:40 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 10:44:40 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 23 10:44:40 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 23 10:44:40 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 10:44:40 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 10:44:40 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 10:44:40 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 10:44:40 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 10:44:40 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 10:44:40 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 23 10:44:40 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 23 10:44:40 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 10:44:40 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 10:44:40 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 10:44:40 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 10:44:40 localhost kernel: x86/bugs: return thunk changed
Jan 23 10:44:40 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
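The mitigation lines above (Spectre V1/V2, RETBleed, SSB, SRSO) are also exported after boot, one status file per issue, under the standard sysfs directory /sys/devices/system/cpu/vulnerabilities. A small reader:

    import pathlib

    # Each file holds one human-readable status line, matching the
    # Spectre/RETBleed/SRSO messages in the boot log.
    vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for f in sorted(vuln_dir.iterdir()):
        print(f"{f.name:28} {f.read_text().strip()}")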
Jan 23 10:44:40 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 10:44:40 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 10:44:40 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 10:44:40 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 23 10:44:40 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 10:44:40 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 23 10:44:40 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 23 10:44:40 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 23 10:44:40 localhost kernel: landlock: Up and running.
Jan 23 10:44:40 localhost kernel: Yama: becoming mindful.
Jan 23 10:44:40 localhost kernel: SELinux:  Initializing.
Jan 23 10:44:40 localhost kernel: LSM support for eBPF active
Jan 23 10:44:40 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 10:44:40 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 10:44:40 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 23 10:44:40 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 10:44:40 localhost kernel: ... version:                0
Jan 23 10:44:40 localhost kernel: ... bit width:              48
Jan 23 10:44:40 localhost kernel: ... generic registers:      6
Jan 23 10:44:40 localhost kernel: ... value mask:             0000ffffffffffff
Jan 23 10:44:40 localhost kernel: ... max period:             00007fffffffffff
Jan 23 10:44:40 localhost kernel: ... fixed-purpose events:   0
Jan 23 10:44:40 localhost kernel: ... event mask:             000000000000003f
Jan 23 10:44:40 localhost kernel: signal: max sigframe size: 1776
Jan 23 10:44:40 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 23 10:44:40 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 23 10:44:40 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 23 10:44:40 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 23 10:44:40 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 23 10:44:40 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 23 10:44:40 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
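The 44800 BogoMIPS total is the preset per-CPU figure times eight. The per-CPU value itself follows from lpj=2800000 via the classic formula, assuming HZ=1000 (the usual RHEL 9 tick rate; an assumption, not something the log states):

    lpj, hz = 2_800_000, 1000        # loops-per-jiffy from the log; HZ assumed
    per_cpu = lpj * hz / 500_000     # classic BogoMIPS formula -> 5600.0
    print(per_cpu, per_cpu * 8)      # -> 5600.0 44800.0, matching the log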
Jan 23 10:44:40 localhost kernel: node 0 deferred pages initialised in 21ms
Jan 23 10:44:40 localhost kernel: Memory: 7763820K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
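Of the 8388068 KiB the kernel manages, 7763820 KiB remain available once kernel text/data, the initrd, and reservations (including the 256 MiB crash kernel) are carved out, roughly 92.6%:

    available_k, total_k = 7_763_820, 8_388_068        # from the Memory: line
    print(f"{available_k / total_k:.1%} available")    # -> 92.6% available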
Jan 23 10:44:40 localhost kernel: devtmpfs: initialized
Jan 23 10:44:40 localhost kernel: x86/mm: Memory block size: 128MB
Jan 23 10:44:40 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 10:44:40 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 23 10:44:40 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 10:44:40 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 10:44:40 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 23 10:44:40 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 10:44:40 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 10:44:40 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 23 10:44:40 localhost kernel: audit: type=2000 audit(1769165078.980:1): state=initialized audit_enabled=0 res=1
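The audit(...) prefix is a Unix epoch timestamp plus a per-boot serial number; decoding it recovers the wall-clock time the RTC later confirms (the system clock is set to 1769165079 = 2026-01-23T10:44:39 UTC further down):

    from datetime import datetime, timezone

    ts = 1769165078.980  # from "audit(1769165078.980:1)"
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2026-01-23T10:44:38.980000+00:00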
Jan 23 10:44:40 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 23 10:44:40 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 10:44:40 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 10:44:40 localhost kernel: cpuidle: using governor menu
Jan 23 10:44:40 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 10:44:40 localhost kernel: PCI: Using configuration type 1 for base access
Jan 23 10:44:40 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 23 10:44:40 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 10:44:40 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 10:44:40 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 10:44:40 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 10:44:40 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 10:44:40 localhost kernel: Demotion targets for Node 0: null
Jan 23 10:44:40 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 10:44:40 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 23 10:44:40 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 23 10:44:40 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 10:44:40 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 10:44:40 localhost kernel: ACPI: Interpreter enabled
Jan 23 10:44:40 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 23 10:44:40 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 10:44:40 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 10:44:40 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 10:44:40 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 10:44:40 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 10:44:40 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [3] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [4] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [5] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [6] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [7] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [8] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [9] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [10] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [11] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [12] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [13] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [14] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [15] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [16] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [17] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [18] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [19] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [20] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [21] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [22] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [23] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [24] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [25] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [26] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [27] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [28] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [29] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [30] registered
Jan 23 10:44:40 localhost kernel: acpiphp: Slot [31] registered
Jan 23 10:44:40 localhost kernel: PCI host bridge to bus 0000:00
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 23 10:44:40 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 23 10:44:40 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 23 10:44:40 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 10:44:40 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 23 10:44:40 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
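Every 1af4:xxxx function above is a virtio device: vendor 0x1af4 is Red Hat/virtio, transitional (legacy-capable) devices use fixed IDs in the 0x1000-0x103f range, and virtio 1.0 "modern" devices use 0x1040 plus the device type, so 0x1050 is type 16, the GPU. A convenience lookup for the IDs seen here (names per the virtio spec; not an exhaustive table):

    # virtio PCI device IDs observed in the enumeration above.
    VIRTIO = {
        0x1000: "virtio-net     (transitional)",
        0x1001: "virtio-block   (transitional)",
        0x1002: "virtio-balloon (transitional)",
        0x1005: "virtio-rng     (transitional)",
        0x1050: "virtio-gpu     (modern: 0x1040 + type 16)",
    }
    for dev_id, name in sorted(VIRTIO.items()):
        print(f"1af4:{dev_id:04x}  {name}")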
Jan 23 10:44:40 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 10:44:40 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 10:44:40 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 10:44:40 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 10:44:40 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 10:44:40 localhost kernel: iommu: Default domain type: Translated
Jan 23 10:44:40 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 10:44:40 localhost kernel: SCSI subsystem initialized
Jan 23 10:44:40 localhost kernel: ACPI: bus type USB registered
Jan 23 10:44:40 localhost kernel: usbcore: registered new interface driver usbfs
Jan 23 10:44:40 localhost kernel: usbcore: registered new interface driver hub
Jan 23 10:44:40 localhost kernel: usbcore: registered new device driver usb
Jan 23 10:44:40 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 10:44:40 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 23 10:44:40 localhost kernel: PTP clock support registered
Jan 23 10:44:40 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 23 10:44:40 localhost kernel: NetLabel: Initializing
Jan 23 10:44:40 localhost kernel: NetLabel:  domain hash size = 128
Jan 23 10:44:40 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 23 10:44:40 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 23 10:44:40 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 23 10:44:40 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 10:44:40 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 10:44:40 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 23 10:44:40 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 10:44:40 localhost kernel: vgaarb: loaded
Jan 23 10:44:40 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 10:44:40 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 10:44:40 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 10:44:40 localhost kernel: pnp: PnP ACPI init
Jan 23 10:44:40 localhost kernel: pnp 00:03: [dma 2]
Jan 23 10:44:40 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 23 10:44:40 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 10:44:40 localhost kernel: NET: Registered PF_INET protocol family
Jan 23 10:44:40 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 10:44:40 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 10:44:40 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 10:44:40 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 10:44:40 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 23 10:44:40 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 10:44:40 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 23 10:44:40 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 10:44:40 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 10:44:40 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 10:44:40 localhost kernel: NET: Registered PF_XDP protocol family
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 23 10:44:40 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 23 10:44:40 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 10:44:40 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 23 10:44:40 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 79620 usecs
Jan 23 10:44:40 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 23 10:44:40 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 10:44:40 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 23 10:44:40 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 23 10:44:40 localhost kernel: ACPI: bus type thunderbolt registered
Jan 23 10:44:40 localhost kernel: Initialise system trusted keyrings
Jan 23 10:44:40 localhost kernel: Key type blacklist registered
Jan 23 10:44:40 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 23 10:44:40 localhost kernel: zbud: loaded
Jan 23 10:44:40 localhost kernel: integrity: Platform Keyring initialized
Jan 23 10:44:40 localhost kernel: integrity: Machine keyring initialized
Jan 23 10:44:40 localhost kernel: Freeing initrd memory: 87956K
Jan 23 10:44:40 localhost kernel: NET: Registered PF_ALG protocol family
Jan 23 10:44:40 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 23 10:44:40 localhost kernel: Key type asymmetric registered
Jan 23 10:44:40 localhost kernel: Asymmetric key parser 'x509' registered
Jan 23 10:44:40 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 23 10:44:40 localhost kernel: io scheduler mq-deadline registered
Jan 23 10:44:40 localhost kernel: io scheduler kyber registered
Jan 23 10:44:40 localhost kernel: io scheduler bfq registered
Jan 23 10:44:40 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 23 10:44:40 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 23 10:44:40 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 23 10:44:40 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 23 10:44:40 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 23 10:44:40 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 23 10:44:40 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 23 10:44:40 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 10:44:40 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 10:44:40 localhost kernel: Non-volatile memory driver v1.3
Jan 23 10:44:40 localhost kernel: rdac: device handler registered
Jan 23 10:44:40 localhost kernel: hp_sw: device handler registered
Jan 23 10:44:40 localhost kernel: emc: device handler registered
Jan 23 10:44:40 localhost kernel: alua: device handler registered
Jan 23 10:44:40 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 23 10:44:40 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 23 10:44:40 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 23 10:44:40 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 23 10:44:40 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 23 10:44:40 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 10:44:40 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 23 10:44:40 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 23 10:44:40 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 23 10:44:40 localhost kernel: hub 1-0:1.0: USB hub found
Jan 23 10:44:40 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 23 10:44:40 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 23 10:44:40 localhost kernel: usbserial: USB Serial support registered for generic
Jan 23 10:44:40 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 10:44:40 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 10:44:40 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 10:44:40 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 10:44:40 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 10:44:40 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 23 10:44:40 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 10:44:40 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T10:44:39 UTC (1769165079)
Jan 23 10:44:40 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 10:44:40 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 10:44:40 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 23 10:44:40 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 10:44:40 localhost kernel: usbcore: registered new interface driver usbhid
Jan 23 10:44:40 localhost kernel: usbhid: USB HID core driver
Jan 23 10:44:40 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 23 10:44:40 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 23 10:44:40 localhost kernel: Initializing XFRM netlink socket
Jan 23 10:44:40 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 23 10:44:40 localhost kernel: Segment Routing with IPv6
Jan 23 10:44:40 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 23 10:44:40 localhost kernel: mpls_gso: MPLS GSO support
Jan 23 10:44:40 localhost kernel: IPI shorthand broadcast: enabled
Jan 23 10:44:40 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 23 10:44:40 localhost kernel: AES CTR mode by8 optimization enabled
Jan 23 10:44:40 localhost kernel: sched_clock: Marking stable (1606002289, 150429330)->(1966538489, -210106870)
Jan 23 10:44:40 localhost kernel: registered taskstats version 1
Jan 23 10:44:40 localhost kernel: Loading compiled-in X.509 certificates
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 23 10:44:40 localhost kernel: Demotion targets for Node 0: null
Jan 23 10:44:40 localhost kernel: page_owner is disabled
Jan 23 10:44:40 localhost kernel: Key type .fscrypt registered
Jan 23 10:44:40 localhost kernel: Key type fscrypt-provisioning registered
Jan 23 10:44:40 localhost kernel: Key type big_key registered
Jan 23 10:44:40 localhost kernel: Key type encrypted registered
Jan 23 10:44:40 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 10:44:40 localhost kernel: Loading compiled-in module X.509 certificates
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 10:44:40 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 23 10:44:40 localhost kernel: ima: No architecture policies found
Jan 23 10:44:40 localhost kernel: evm: Initialising EVM extended attributes:
Jan 23 10:44:40 localhost kernel: evm: security.selinux
Jan 23 10:44:40 localhost kernel: evm: security.SMACK64 (disabled)
Jan 23 10:44:40 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 23 10:44:40 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 23 10:44:40 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 23 10:44:40 localhost kernel: evm: security.apparmor (disabled)
Jan 23 10:44:40 localhost kernel: evm: security.ima
Jan 23 10:44:40 localhost kernel: evm: security.capability
Jan 23 10:44:40 localhost kernel: evm: HMAC attrs: 0x1
Jan 23 10:44:40 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 23 10:44:40 localhost kernel: Running certificate verification RSA selftest
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 23 10:44:40 localhost kernel: Running certificate verification ECDSA selftest
Jan 23 10:44:40 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 23 10:44:40 localhost kernel: clk: Disabling unused clocks
Jan 23 10:44:40 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 23 10:44:40 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 23 10:44:40 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 23 10:44:40 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 23 10:44:40 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 23 10:44:40 localhost kernel: Run /init as init process
Jan 23 10:44:40 localhost kernel:   with arguments:
Jan 23 10:44:40 localhost kernel:     /init
Jan 23 10:44:40 localhost kernel:   with environment:
Jan 23 10:44:40 localhost kernel:     HOME=/
Jan 23 10:44:40 localhost kernel:     TERM=linux
Jan 23 10:44:40 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 23 10:44:40 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 10:44:40 localhost systemd[1]: Detected virtualization kvm.
Jan 23 10:44:40 localhost systemd[1]: Detected architecture x86-64.
Jan 23 10:44:40 localhost systemd[1]: Running in initrd.
Jan 23 10:44:40 localhost systemd[1]: No hostname configured, using default hostname.
Jan 23 10:44:40 localhost systemd[1]: Hostname set to <localhost>.
Jan 23 10:44:40 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 23 10:44:40 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 23 10:44:40 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 23 10:44:40 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 23 10:44:40 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 23 10:44:40 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 23 10:44:40 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 23 10:44:40 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 23 10:44:40 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 23 10:44:40 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 10:44:40 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 23 10:44:40 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 23 10:44:40 localhost systemd[1]: Reached target Local File Systems.
Jan 23 10:44:40 localhost systemd[1]: Reached target Path Units.
Jan 23 10:44:40 localhost systemd[1]: Reached target Slice Units.
Jan 23 10:44:40 localhost systemd[1]: Reached target Swaps.
Jan 23 10:44:40 localhost systemd[1]: Reached target Timer Units.
Jan 23 10:44:40 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 10:44:40 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 23 10:44:40 localhost systemd[1]: Listening on Journal Socket.
Jan 23 10:44:40 localhost systemd[1]: Listening on udev Control Socket.
Jan 23 10:44:40 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 23 10:44:40 localhost systemd[1]: Reached target Socket Units.
Jan 23 10:44:40 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 23 10:44:40 localhost systemd[1]: Starting Journal Service...
Jan 23 10:44:40 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 10:44:40 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 23 10:44:40 localhost systemd[1]: Starting Create System Users...
Jan 23 10:44:40 localhost systemd[1]: Starting Setup Virtual Console...
Jan 23 10:44:40 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 23 10:44:40 localhost systemd[1]: Finished Create System Users.
Jan 23 10:44:40 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 23 10:44:40 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 10:44:40 localhost systemd-journald[307]: Journal started
Jan 23 10:44:40 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/850fefef41624be1a464e0586d5e52c6) is 8.0M, max 153.6M, 145.6M free.
Jan 23 10:44:40 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Jan 23 10:44:40 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Jan 23 10:44:40 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 23 10:44:40 localhost systemd[1]: Started Journal Service.
Jan 23 10:44:40 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 10:44:40 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 10:44:40 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 10:44:40 localhost systemd[1]: Finished Setup Virtual Console.
Jan 23 10:44:40 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 23 10:44:40 localhost systemd[1]: Starting dracut cmdline hook...
Jan 23 10:44:40 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Jan 23 10:44:40 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 10:44:40 localhost systemd[1]: Finished dracut cmdline hook.
Jan 23 10:44:40 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 23 10:44:40 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 10:44:40 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 23 10:44:40 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 23 10:44:40 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 23 10:44:40 localhost kernel: RPC: Registered udp transport module.
Jan 23 10:44:40 localhost kernel: RPC: Registered tcp transport module.
Jan 23 10:44:40 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 23 10:44:40 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 10:44:40 localhost rpc.statd[444]: Version 2.5.4 starting
Jan 23 10:44:40 localhost rpc.statd[444]: Initializing NSM state
Jan 23 10:44:40 localhost rpc.idmapd[449]: Setting log level to 0
Jan 23 10:44:40 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 23 10:44:40 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 10:44:40 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 10:44:40 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 10:44:40 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 23 10:44:40 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 23 10:44:40 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 23 10:44:40 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 23 10:44:40 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 10:44:40 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 23 10:44:40 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 10:44:40 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 10:44:40 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 10:44:40 localhost systemd[1]: Reached target Network.
Jan 23 10:44:40 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 10:44:40 localhost systemd[1]: Starting dracut initqueue hook...
Jan 23 10:44:40 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 23 10:44:40 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
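The virtio-blk capacity line is internally consistent: 167772160 sectors of 512 bytes is exactly 80 GiB, which decimal-prefixed tools report as 85.9 GB:

    sectors, sector_bytes = 167_772_160, 512
    size = sectors * sector_bytes
    print(size)             # 85899345920 bytes
    print(size / 10**9)     # 85.89934592  (decimal GB)
    print(size / 2**30)     # 80.0         (binary GiB)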
Jan 23 10:44:40 localhost kernel: libata version 3.00 loaded.
Jan 23 10:44:40 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 23 10:44:40 localhost kernel:  vda: vda1
Jan 23 10:44:40 localhost systemd-udevd[478]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:44:40 localhost kernel: scsi host0: ata_piix
Jan 23 10:44:40 localhost kernel: scsi host1: ata_piix
Jan 23 10:44:40 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 23 10:44:40 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 23 10:44:40 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 10:44:41 localhost systemd[1]: Reached target Initrd Root Device.
Jan 23 10:44:41 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 23 10:44:41 localhost kernel: ata1: found unknown device (class 0)
Jan 23 10:44:41 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 10:44:41 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 23 10:44:41 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 23 10:44:41 localhost systemd[1]: Reached target System Initialization.
Jan 23 10:44:41 localhost systemd[1]: Reached target Basic System.
Jan 23 10:44:41 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 23 10:44:41 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 10:44:41 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 10:44:41 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 23 10:44:41 localhost systemd[1]: Finished dracut initqueue hook.
Jan 23 10:44:41 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 10:44:41 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 23 10:44:41 localhost systemd[1]: Reached target Remote File Systems.
Jan 23 10:44:41 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 23 10:44:41 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 23 10:44:41 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 23 10:44:41 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Jan 23 10:44:41 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 10:44:41 localhost systemd[1]: Mounting /sysroot...
Jan 23 10:44:41 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 23 10:44:41 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 23 10:44:41 localhost kernel: XFS (vda1): Ending clean mount
Jan 23 10:44:41 localhost systemd[1]: Mounted /sysroot.
Jan 23 10:44:41 localhost systemd[1]: Reached target Initrd Root File System.
Jan 23 10:44:41 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 23 10:44:41 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 10:44:41 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 23 10:44:41 localhost systemd[1]: Reached target Initrd File Systems.
Jan 23 10:44:41 localhost systemd[1]: Reached target Initrd Default Target.
Jan 23 10:44:41 localhost systemd[1]: Starting dracut mount hook...
Jan 23 10:44:41 localhost systemd[1]: Finished dracut mount hook.
Jan 23 10:44:41 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 23 10:44:42 localhost rpc.idmapd[449]: exiting on signal 15
Jan 23 10:44:42 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 23 10:44:42 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 23 10:44:42 localhost systemd[1]: Stopped target Network.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Timer Units.
Jan 23 10:44:42 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 23 10:44:42 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Basic System.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Path Units.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Remote File Systems.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Slice Units.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Socket Units.
Jan 23 10:44:42 localhost systemd[1]: Stopped target System Initialization.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Local File Systems.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Swaps.
Jan 23 10:44:42 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut mount hook.
Jan 23 10:44:42 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 23 10:44:42 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 23 10:44:42 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 23 10:44:42 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 23 10:44:42 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 23 10:44:42 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 23 10:44:42 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 23 10:44:42 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 23 10:44:42 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 23 10:44:42 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 23 10:44:42 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 23 10:44:42 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Closed udev Control Socket.
Jan 23 10:44:42 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Closed udev Kernel Socket.
Jan 23 10:44:42 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 23 10:44:42 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 23 10:44:42 localhost systemd[1]: Starting Cleanup udev Database...
Jan 23 10:44:42 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 23 10:44:42 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 23 10:44:42 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Create System Users.
Jan 23 10:44:42 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished Cleanup udev Database.
Jan 23 10:44:42 localhost systemd[1]: Reached target Switch Root.
Jan 23 10:44:42 localhost systemd[1]: Starting Switch Root...
Jan 23 10:44:42 localhost systemd[1]: Switching root.
Jan 23 10:44:42 localhost systemd-journald[307]: Journal stopped
Jan 23 10:44:42 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Jan 23 10:44:42 localhost kernel: audit: type=1404 audit(1769165082.270:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability open_perms=1
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 10:44:42 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 10:44:42 localhost kernel: audit: type=1403 audit(1769165082.409:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 10:44:42 localhost systemd[1]: Successfully loaded SELinux policy in 141.490ms.
Jan 23 10:44:42 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.740ms.
Jan 23 10:44:42 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 10:44:42 localhost systemd[1]: Detected virtualization kvm.
Jan 23 10:44:42 localhost systemd[1]: Detected architecture x86-64.
Jan 23 10:44:42 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:44:42 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped Switch Root.
Jan 23 10:44:42 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 10:44:42 localhost systemd[1]: Created slice Slice /system/getty.
Jan 23 10:44:42 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 23 10:44:42 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 23 10:44:42 localhost systemd[1]: Created slice User and Session Slice.
Jan 23 10:44:42 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 10:44:42 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 23 10:44:42 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 23 10:44:42 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Switch Root.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 23 10:44:42 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 23 10:44:42 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 23 10:44:42 localhost systemd[1]: Reached target Path Units.
Jan 23 10:44:42 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 23 10:44:42 localhost systemd[1]: Reached target Slice Units.
Jan 23 10:44:42 localhost systemd[1]: Reached target Swaps.
Jan 23 10:44:42 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 23 10:44:42 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 23 10:44:42 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 23 10:44:42 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 23 10:44:42 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 23 10:44:42 localhost systemd[1]: Listening on udev Control Socket.
Jan 23 10:44:42 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 23 10:44:42 localhost systemd[1]: Mounting Huge Pages File System...
Jan 23 10:44:42 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 23 10:44:42 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 23 10:44:42 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 23 10:44:42 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 10:44:42 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 23 10:44:42 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 10:44:42 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 23 10:44:42 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 23 10:44:42 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 23 10:44:42 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 23 10:44:42 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 23 10:44:42 localhost systemd[1]: Stopped Journal Service.
Jan 23 10:44:42 localhost systemd[1]: Starting Journal Service...
Jan 23 10:44:42 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 10:44:42 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 23 10:44:42 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 10:44:42 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 23 10:44:42 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 10:44:42 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 23 10:44:42 localhost kernel: fuse: init (API version 7.37)
Jan 23 10:44:42 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 23 10:44:42 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 23 10:44:42 localhost systemd-journald[679]: Journal started
Jan 23 10:44:42 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 10:44:42 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 23 10:44:42 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Started Journal Service.
Jan 23 10:44:42 localhost systemd[1]: Mounted Huge Pages File System.
Jan 23 10:44:42 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 23 10:44:42 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 23 10:44:42 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 23 10:44:42 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 23 10:44:42 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 10:44:42 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 23 10:44:42 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 23 10:44:42 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 23 10:44:42 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 23 10:44:42 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 23 10:44:42 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 23 10:44:42 localhost kernel: ACPI: bus type drm_connector registered
Jan 23 10:44:42 localhost systemd[1]: Mounting FUSE Control File System...
Jan 23 10:44:42 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 10:44:42 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 23 10:44:42 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 23 10:44:42 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 10:44:42 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 23 10:44:42 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 10:44:42 localhost systemd-journald[679]: Received client request to flush runtime journal.
Jan 23 10:44:42 localhost systemd[1]: Starting Create System Users...
Jan 23 10:44:42 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 10:44:42 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 23 10:44:42 localhost systemd[1]: Mounted FUSE Control File System.
Jan 23 10:44:42 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 23 10:44:42 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 23 10:44:42 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 10:44:42 localhost systemd[1]: Finished Create System Users.
Jan 23 10:44:42 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 10:44:42 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 10:44:42 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 23 10:44:42 localhost systemd[1]: Reached target Local File Systems.
Jan 23 10:44:42 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 23 10:44:42 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 23 10:44:42 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 10:44:42 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 23 10:44:42 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 23 10:44:42 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 23 10:44:42 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 10:44:42 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 23 10:44:42 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 23 10:44:42 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 23 10:44:43 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 10:44:43 localhost systemd[1]: Starting Security Auditing Service...
Jan 23 10:44:43 localhost systemd[1]: Starting RPC Bind...
Jan 23 10:44:43 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 23 10:44:43 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 23 10:44:43 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 23 10:44:43 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 23 10:44:43 localhost augenrules[708]: /sbin/augenrules: No change
Jan 23 10:44:43 localhost systemd[1]: Started RPC Bind.
Jan 23 10:44:43 localhost augenrules[723]: No rules
Jan 23 10:44:43 localhost augenrules[723]: enabled 1
Jan 23 10:44:43 localhost augenrules[723]: failure 1
Jan 23 10:44:43 localhost augenrules[723]: pid 703
Jan 23 10:44:43 localhost augenrules[723]: rate_limit 0
Jan 23 10:44:43 localhost augenrules[723]: backlog_limit 8192
Jan 23 10:44:43 localhost augenrules[723]: lost 0
Jan 23 10:44:43 localhost augenrules[723]: backlog 0
Jan 23 10:44:43 localhost augenrules[723]: backlog_wait_time 60000
Jan 23 10:44:43 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 23 10:44:43 localhost augenrules[723]: enabled 1
Jan 23 10:44:43 localhost augenrules[723]: failure 1
Jan 23 10:44:43 localhost augenrules[723]: pid 703
Jan 23 10:44:43 localhost augenrules[723]: rate_limit 0
Jan 23 10:44:43 localhost augenrules[723]: backlog_limit 8192
Jan 23 10:44:43 localhost augenrules[723]: lost 0
Jan 23 10:44:43 localhost augenrules[723]: backlog 0
Jan 23 10:44:43 localhost augenrules[723]: backlog_wait_time 60000
Jan 23 10:44:43 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 23 10:44:43 localhost augenrules[723]: enabled 1
Jan 23 10:44:43 localhost augenrules[723]: failure 1
Jan 23 10:44:43 localhost augenrules[723]: pid 703
Jan 23 10:44:43 localhost augenrules[723]: rate_limit 0
Jan 23 10:44:43 localhost augenrules[723]: backlog_limit 8192
Jan 23 10:44:43 localhost augenrules[723]: lost 0
Jan 23 10:44:43 localhost augenrules[723]: backlog 0
Jan 23 10:44:43 localhost augenrules[723]: backlog_wait_time 60000
Jan 23 10:44:43 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 23 10:44:43 localhost systemd[1]: Started Security Auditing Service.
Jan 23 10:44:43 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 23 10:44:43 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 23 10:44:43 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 23 10:44:43 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 23 10:44:43 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 10:44:43 localhost systemd[1]: Starting Update is Completed...
Jan 23 10:44:43 localhost systemd[1]: Finished Update is Completed.
Jan 23 10:44:43 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 10:44:43 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 10:44:43 localhost systemd[1]: Reached target System Initialization.
Jan 23 10:44:43 localhost systemd[1]: Started dnf makecache --timer.
Jan 23 10:44:43 localhost systemd[1]: Started Daily rotation of log files.
Jan 23 10:44:43 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 23 10:44:43 localhost systemd[1]: Reached target Timer Units.
Jan 23 10:44:43 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 10:44:43 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 23 10:44:43 localhost systemd[1]: Reached target Socket Units.
Jan 23 10:44:43 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 23 10:44:43 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 10:44:43 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 23 10:44:43 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 10:44:43 localhost systemd-udevd[752]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:44:43 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 10:44:43 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 10:44:43 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 23 10:44:43 localhost systemd[1]: Reached target Basic System.
Jan 23 10:44:43 localhost dbus-broker-lau[751]: Ready
Jan 23 10:44:43 localhost systemd[1]: Starting NTP client/server...
Jan 23 10:44:43 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 23 10:44:43 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 23 10:44:43 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 23 10:44:43 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 23 10:44:43 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 23 10:44:43 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 10:44:43 localhost systemd[1]: Started irqbalance daemon.
Jan 23 10:44:43 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 23 10:44:43 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 10:44:43 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 10:44:43 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 10:44:43 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 23 10:44:43 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 23 10:44:43 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 23 10:44:43 localhost chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 10:44:43 localhost chronyd[792]: Loaded 0 symmetric keys
Jan 23 10:44:43 localhost chronyd[792]: Using right/UTC timezone to obtain leap second data
Jan 23 10:44:43 localhost chronyd[792]: Loaded seccomp filter (level 2)
Jan 23 10:44:43 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 23 10:44:43 localhost systemd[1]: Starting User Login Management...
Jan 23 10:44:43 localhost systemd[1]: Started NTP client/server.
Jan 23 10:44:43 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 23 10:44:43 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 23 10:44:43 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 23 10:44:43 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 23 10:44:43 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 23 10:44:43 localhost systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 10:44:43 localhost systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 10:44:43 localhost kernel: Console: switching to colour dummy device 80x25
Jan 23 10:44:43 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 10:44:43 localhost kernel: [drm] features: -context_init
Jan 23 10:44:43 localhost kernel: [drm] number of scanouts: 1
Jan 23 10:44:43 localhost kernel: [drm] number of cap sets: 0
Jan 23 10:44:43 localhost systemd-logind[798]: New seat seat0.
Jan 23 10:44:43 localhost systemd[1]: Started User Login Management.
Jan 23 10:44:43 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 23 10:44:43 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 23 10:44:43 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 23 10:44:43 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 10:44:43 localhost kernel: kvm_amd: TSC scaling supported
Jan 23 10:44:43 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 23 10:44:43 localhost kernel: kvm_amd: Nested Paging enabled
Jan 23 10:44:43 localhost kernel: kvm_amd: LBR virtualization supported
Jan 23 10:44:44 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Jan 23 10:44:44 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 23 10:44:44 localhost cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Fri, 23 Jan 2026 10:44:44 +0000. Up 6.21 seconds.
Jan 23 10:44:44 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 23 10:44:44 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 23 10:44:44 localhost systemd[1]: run-cloud\x2dinit-tmp-tmph3cqvvaz.mount: Deactivated successfully.
Jan 23 10:44:44 localhost systemd[1]: Starting Hostname Service...
Jan 23 10:44:44 localhost systemd[1]: Started Hostname Service.
Jan 23 10:44:44 np0005593388.novalocal systemd-hostnamed[854]: Hostname set to <np0005593388.novalocal> (static)
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Reached target Preparation for Network.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Starting Network Manager...
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7737] NetworkManager (version 1.54.3-2.el9) is starting... (boot:99274698-eb02-4e43-8d1b-7c4762b80d7f)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7741] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7824] manager[0x557740b53000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7880] hostname: hostname: using hostnamed
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7881] hostname: static hostname changed from (none) to "np0005593388.novalocal"
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7887] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7994] manager[0x557740b53000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.7995] manager[0x557740b53000]: rfkill: WWAN hardware radio set enabled
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8036] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8037] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8038] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8038] manager: Networking is enabled by state file
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8041] settings: Loaded settings plugin: keyfile (internal)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8106] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8131] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8145] dhcp: init: Using DHCP client 'internal'
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8148] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8162] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8170] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8179] device (lo): Activation: starting connection 'lo' (db09884b-81ca-48ab-b981-7d9244c8c055)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8189] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8193] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8224] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8229] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8232] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8234] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8236] device (eth0): carrier: link connected
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8239] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8246] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8253] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8258] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8259] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8261] manager: NetworkManager state is now CONNECTING
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8263] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8271] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8276] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8314] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8323] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8344] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Started Network Manager.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Reached target Network.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8652] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8654] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8655] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8660] device (lo): Activation: successful, device activated.
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8665] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8668] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8670] device (eth0): Activation: successful, device activated.
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8675] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 10:44:44 np0005593388.novalocal NetworkManager[858]: <info>  [1769165084.8676] manager: startup complete
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Reached target NFS client services.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Reached target Remote File Systems.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 23 10:44:44 np0005593388.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Fri, 23 Jan 2026 10:44:45 +0000. Up 7.26 seconds.
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.107         | 255.255.255.0 | global | fa:16:3e:4e:df:36 |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe4e:df36/64 |       .       |  link  | fa:16:3e:4e:df:36 |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 23 10:44:45 np0005593388.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 10:44:46 np0005593388.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 23 10:44:46 np0005593388.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 23 10:44:46 np0005593388.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 23 10:44:46 np0005593388.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 23 10:44:46 np0005593388.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 23 10:44:46 np0005593388.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Generating public/private rsa key pair.
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: The key fingerprint is:
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: SHA256:X8Rr7SDDOCg7f4VKaZ6RnCOHg9qHJososufgyzhV4Rk root@np0005593388.novalocal
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: The key's randomart image is:
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: +---[RSA 3072]----+
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |                 |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |    E      .     |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |   . +      o    |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |    +  . o . o   |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |   o.o.+S.+ = .  |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |  o +o@ .o.= o   |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |.+ .oB = ..   .  |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |%.= .o+ .        |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |O%o.  ..         |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: The key fingerprint is:
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: SHA256:nstfQBIX4otA0kv/FATe2qzV2z5fK4KKZ6d6yUK+IU4 root@np0005593388.novalocal
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: The key's randomart image is:
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: +---[ECDSA 256]---+
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |  ... .o+ o.     |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |   oo. o.+       |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |   ..o. +..      |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |    ...=.=       |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |      ooS o      |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |     . +.. +     |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |   Eo.o + o o   .|
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |  o .ooB + +.. ..|
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |   . +Bo*.. ooo. |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: The key fingerprint is:
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: SHA256:Sgp++w645dOM1PICnI56V+JXJYAmguaRtzZQKTN7HnQ root@np0005593388.novalocal
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: The key's randomart image is:
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: +--[ED25519 256]--+
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |. o...           |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |oO.+oE.          |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |o.Xoo  .         |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: | o *    . .      |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: | .=.o.. So       |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: | .+++oo..        |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: | oo+=O..         |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |. o=*o=          |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: |o...o*o          |
Jan 23 10:44:46 np0005593388.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 23 10:44:46 np0005593388.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Reached target Network is Online.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting System Logging Service...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting Permit User Sessions...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Finished Permit User Sessions.
Jan 23 10:44:46 np0005593388.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 23 10:44:46 np0005593388.novalocal sshd[1007]: Server listening on :: port 22.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Started Command Scheduler.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Started Getty on tty1.
Jan 23 10:44:46 np0005593388.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 23 10:44:46 np0005593388.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 23 10:44:46 np0005593388.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 23 10:44:46 np0005593388.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 9% if used.)
Jan 23 10:44:46 np0005593388.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 23 10:44:46 np0005593388.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Reached target Login Prompts.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Started System Logging Service.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Reached target Multi-User System.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 23 10:44:46 np0005593388.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 23 10:44:46 np0005593388.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:44:46 np0005593388.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Jan 23 10:44:46 np0005593388.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 23 10:44:46 np0005593388.novalocal sshd-session[1082]: Connection closed by 38.102.83.114 port 49088 [preauth]
Jan 23 10:44:46 np0005593388.novalocal sshd-session[1103]: Unable to negotiate with 38.102.83.114 port 49096: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 23 10:44:46 np0005593388.novalocal sshd-session[1109]: Connection reset by 38.102.83.114 port 49102 [preauth]
Jan 23 10:44:46 np0005593388.novalocal sshd-session[1130]: Unable to negotiate with 38.102.83.114 port 49118: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 23 10:44:46 np0005593388.novalocal cloud-init[1137]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Fri, 23 Jan 2026 10:44:46 +0000. Up 8.97 seconds.
Jan 23 10:44:46 np0005593388.novalocal sshd-session[1143]: Unable to negotiate with 38.102.83.114 port 49134: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 23 10:44:47 np0005593388.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 23 10:44:47 np0005593388.novalocal sshd-session[1150]: Connection reset by 38.102.83.114 port 49136 [preauth]
Jan 23 10:44:47 np0005593388.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 23 10:44:47 np0005593388.novalocal sshd-session[1230]: Unable to negotiate with 38.102.83.114 port 49164: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 23 10:44:47 np0005593388.novalocal sshd-session[1240]: Unable to negotiate with 38.102.83.114 port 49178: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 23 10:44:47 np0005593388.novalocal sshd-session[1179]: Connection closed by 38.102.83.114 port 49152 [preauth]
Jan 23 10:44:47 np0005593388.novalocal dracut[1285]: dracut-057-102.git20250818.el9
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1303]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Fri, 23 Jan 2026 10:44:47 +0000. Up 9.38 seconds.
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1332]: #############################################################
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1336]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1345]: 256 SHA256:nstfQBIX4otA0kv/FATe2qzV2z5fK4KKZ6d6yUK+IU4 root@np0005593388.novalocal (ECDSA)
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1353]: 256 SHA256:Sgp++w645dOM1PICnI56V+JXJYAmguaRtzZQKTN7HnQ root@np0005593388.novalocal (ED25519)
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1360]: 3072 SHA256:X8Rr7SDDOCg7f4VKaZ6RnCOHg9qHJososufgyzhV4Rk root@np0005593388.novalocal (RSA)
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1362]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1365]: #############################################################
Jan 23 10:44:47 np0005593388.novalocal cloud-init[1303]: Cloud-init v. 24.4-8.el9 finished at Fri, 23 Jan 2026 10:44:47 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.54 seconds
Jan 23 10:44:47 np0005593388.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 23 10:44:47 np0005593388.novalocal systemd[1]: Reached target Cloud-init target.
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 10:44:47 np0005593388.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: memstrack is not available
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: memstrack is not available
Jan 23 10:44:48 np0005593388.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 10:44:49 np0005593388.novalocal dracut[1287]: *** Including module: systemd ***
Jan 23 10:44:49 np0005593388.novalocal dracut[1287]: *** Including module: fips ***
Jan 23 10:44:49 np0005593388.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Jan 23 10:44:49 np0005593388.novalocal chronyd[792]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Jan 23 10:44:49 np0005593388.novalocal chronyd[792]: System clock wrong by 1.002555 seconds
Jan 23 10:44:50 np0005593388.novalocal chronyd[792]: System clock was stepped by 1.002555 seconds
Jan 23 10:44:50 np0005593388.novalocal chronyd[792]: System clock TAI offset set to 37 seconds
Jan 23 10:44:50 np0005593388.novalocal dracut[1287]: *** Including module: i18n ***
Jan 23 10:44:50 np0005593388.novalocal dracut[1287]: *** Including module: drm ***
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]: *** Including module: prefixdevname ***
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]: *** Including module: kernel-modules ***
Jan 23 10:44:51 np0005593388.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 23 10:44:51 np0005593388.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
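kernel-modules-extra is reading depmod's search-path configuration; only /etc/depmod.d/dist.conf exists here, and its single "search" directive yields the four directories logged above. A small parser sketch for that fragment format ("search" directives, "#" comments):

    def depmod_search_dirs(conf_text: str) -> list[str]:
        """Collect directories named by 'search' lines in a depmod.d fragment."""
        dirs: list[str] = []
        for line in conf_text.splitlines():
            line = line.split("#", 1)[0].strip()
            if line.startswith("search "):
                dirs.extend(line.split()[1:])
        return dirs

    # The dist.conf on this host reduces to the following line:
    print(depmod_search_dirs("search updates extra built-in weak-updates"))
    # ['updates', 'extra', 'built-in', 'weak-updates']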
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: qemu ***
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: fstab-sys ***
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: rootfs-block ***
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: terminfo ***
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: udev-rules ***
Jan 23 10:44:52 np0005593388.novalocal chronyd[792]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: virtiofs ***
Jan 23 10:44:52 np0005593388.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]: *** Including module: usrmount ***
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]: *** Including module: base ***
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]: *** Including module: fs-lib ***
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]: *** Including module: kdumpbase ***
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 23 10:44:53 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
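Each ucode_with_caveats configuration is honored only when the host CPU matches the model it targets; inside this KVM guest nothing matches, every config is ignored, and fw_dir ends where it started. A simplified sketch of the outcome (the real matching is done by shell scripts inside each data directory):

    BASE_FW_DIR = "/lib/firmware/updates /lib/firmware"

    def final_fw_dir(matched_configs: list[str]) -> str:
        """Prepend caveat payload dirs only for configs that matched this CPU."""
        extra = [f"/usr/share/microcode_ctl/ucode_with_caveats/{name}"
                 for name in matched_configs]
        return " ".join(extra + [BASE_FW_DIR])

    # No Intel model matches inside this guest, so the default survives:
    print(final_fw_dir([]))   # -> /lib/firmware/updates /lib/firmware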
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]: *** Including module: openssl ***
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]: *** Including module: shutdown ***
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]: *** Including module: squash ***
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]: *** Including modules done ***
Jan 23 10:44:54 np0005593388.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: IRQ 25 affinity is now unmanaged
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: IRQ 31 affinity is now unmanaged
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: IRQ 28 affinity is now unmanaged
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: IRQ 32 affinity is now unmanaged
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: IRQ 30 affinity is now unmanaged
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 23 10:44:54 np0005593388.novalocal irqbalance[788]: IRQ 29 affinity is now unmanaged
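irqbalance distributes interrupts by writing CPU masks to /proc/irq/<n>/smp_affinity; the virtio MSI vectors on this guest reject the write, so each IRQ is marked unmanaged and left alone. A sketch of the same write path and failure handling (must run as root; IRQ 25 is taken from the log):

    import errno

    def set_irq_affinity(irq: int, cpu_mask: int) -> bool:
        """Write a hex CPU mask to /proc/irq/<irq>/smp_affinity."""
        try:
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(f"{cpu_mask:x}\n")
            return True
        except OSError as exc:
            if exc.errno in (errno.EPERM, errno.EACCES, errno.EIO):
                print(f"Cannot change IRQ {irq} affinity: {exc.strerror}")
                return False
            raise

    set_irq_affinity(25, 0x1)   # example: pin IRQ 25 to CPU 0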
Jan 23 10:44:55 np0005593388.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Jan 23 10:44:55 np0005593388.novalocal dracut[1287]: *** Resolving executable dependencies ***
Jan 23 10:44:56 np0005593388.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 10:44:56 np0005593388.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Jan 23 10:44:56 np0005593388.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Jan 23 10:44:56 np0005593388.novalocal dracut[1287]: *** Store current command line parameters ***
Jan 23 10:44:56 np0005593388.novalocal dracut[1287]: Stored kernel commandline:
Jan 23 10:44:56 np0005593388.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Jan 23 10:44:56 np0005593388.novalocal dracut[1287]: *** Install squash loader ***
Jan 23 10:44:57 np0005593388.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: *** Hardlinking files ***
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Mode:           real
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Files:          50
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Linked:         0 files
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Compared:       0 xattrs
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Compared:       0 files
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Saved:          0 B
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: Duration:       0.000564 seconds
Jan 23 10:44:58 np0005593388.novalocal dracut[1287]: *** Hardlinking files done ***
Jan 23 10:44:59 np0005593388.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
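With the kdump image written, its layout can be inspected from userspace; lsinitrd (shipped with dracut) understands the early-microcode cpio prefix and the squashed payload. A sketch, assuming the dracut tools and the image path from the log:

    import subprocess

    IMAGE = "/boot/initramfs-5.14.0-661.el9.x86_64kdump.img"
    subprocess.run(["lsinitrd", IMAGE], check=True)   # lists modules and files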
Jan 23 10:45:00 np0005593388.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Jan 23 10:45:00 np0005593388.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Jan 23 10:45:00 np0005593388.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 23 10:45:00 np0005593388.novalocal systemd[1]: Startup finished in 1.938s (kernel) + 2.404s (initrd) + 17.113s (userspace) = 21.455s.
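The startup summary is the plain sum of the three phases; checking the arithmetic:

    kernel, initrd, userspace = 1.938, 2.404, 17.113
    assert abs((kernel + initrd + userspace) - 21.455) < 1e-9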
Jan 23 10:45:08 np0005593388.novalocal sshd-session[4303]: Accepted publickey for zuul from 38.102.83.114 port 38312 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 23 10:45:08 np0005593388.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 23 10:45:08 np0005593388.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 23 10:45:08 np0005593388.novalocal systemd-logind[798]: New session 1 of user zuul.
Jan 23 10:45:08 np0005593388.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 23 10:45:08 np0005593388.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Queued start job for default target Main User Target.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Created slice User Application Slice.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Reached target Paths.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Reached target Timers.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Starting D-Bus User Message Bus Socket...
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Starting Create User's Volatile Files and Directories...
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Listening on D-Bus User Message Bus Socket.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Reached target Sockets.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Finished Create User's Volatile Files and Directories.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Reached target Basic System.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Reached target Main User Target.
Jan 23 10:45:08 np0005593388.novalocal systemd[4307]: Startup finished in 151ms.
Jan 23 10:45:08 np0005593388.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 23 10:45:08 np0005593388.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 23 10:45:08 np0005593388.novalocal sshd-session[4303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:45:09 np0005593388.novalocal python3[4389]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:45:11 np0005593388.novalocal python3[4417]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:45:15 np0005593388.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 10:45:17 np0005593388.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:45:18 np0005593388.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 23 10:45:19 np0005593388.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWLQ2ZyTOQD3fwTBQ91/emlD0NpQ+QwfAbifBmvCAAgCgWOIaCOG01zHsvEHs+r0Am53pN0ShhjTGJW1VXhdO48qmJZrmdcgScOrt2Zf5exVXBhtOlOhJ9tz/z8+tNrrICvgBII4sJU4wcl0ElClWbP3qvm65/8yyMkuLwGLsZM5KhqE581ZTyZx4bSl+fqXc85PWRZ58aNr3GMksA3rQuSXmdCmrDny/tU727+5JkoZHgPcowyILdjNIn2DE41EWJ7ZtMJlmDqNrzMOVynkJQD8FuMhIqzacqdJtfRdZx00kO2L6AaaC3dGqgWaDVUKwXECOf979CjLAEveCZ7/qM+TMdEZfRG1KxhF86kHwidVhvCYsB8qvqo8GKxSXGZ2dLE/kKSG4Ay3aWSPvwfEpTKDkfi12J/7KSBVpLTvcNbJNl01WnOztlgOnM3zE4PY8cBp+/IxmBpglZUzmMBK1mcx14QOPGntQEDK8shw/ToiF2sAz5Q4tpeQN5QctZeyU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:20 np0005593388.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:20 np0005593388.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:21 np0005593388.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769165120.4093854-207-277437001414847/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=68370e09cea04e8fba9ba0ae7597df34_id_rsa follow=False checksum=2b3cdb2161f31c3fb8335359c0ff7d26843e2f00 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:21 np0005593388.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:22 np0005593388.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769165121.3749988-240-215153159448487/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=68370e09cea04e8fba9ba0ae7597df34_id_rsa.pub follow=False checksum=571dad36541888be856db4c0c8f426fec25298a0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:23 np0005593388.novalocal python3[4979]: ansible-ping Invoked with data=pong
Jan 23 10:45:24 np0005593388.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:45:26 np0005593388.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 23 10:45:27 np0005593388.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:27 np0005593388.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:28 np0005593388.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:28 np0005593388.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:28 np0005593388.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:29 np0005593388.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
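Ansible logs file modes as decimal integers, which is why the entries in this section show mode=448 or mode=493 rather than the octal a human would write. Converting the values seen here back to octal:

    for mode in (448, 493, 420, 384, 511, 288):
        print(mode, "->", oct(mode))
    # 448 -> 0o700, 493 -> 0o755, 420 -> 0o644,
    # 384 -> 0o600, 511 -> 0o777, 288 -> 0o440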
Jan 23 10:45:30 np0005593388.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elwluctooccpctvcfhgcjvstahecvtwc ; /usr/bin/python3'
Jan 23 10:45:30 np0005593388.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:30 np0005593388.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:30 np0005593388.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:30 np0005593388.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txtbvdwwmuagekjogwmeyrtzlqokxwym ; /usr/bin/python3'
Jan 23 10:45:30 np0005593388.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:31 np0005593388.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:31 np0005593388.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:31 np0005593388.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjyjsjshaifnohrwaetiqsecbgefmiyx ; /usr/bin/python3'
Jan 23 10:45:31 np0005593388.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:31 np0005593388.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769165130.6545167-21-127963347320824/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:31 np0005593388.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
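The sudo lines follow Ansible's become pattern: a wrapper shell echoes a one-off BECOME-SUCCESS marker and then runs the module interpreter, letting the controller separate privilege-escalation noise from module output. A sketch of that shape (the marker format is inferred from the log lines, not taken from Ansible's code):

    import random
    import string

    def become_command(interpreter: str = "/usr/bin/python3") -> str:
        marker = "".join(random.choices(string.ascii_lowercase, k=32))
        return f"/bin/sh -c 'echo BECOME-SUCCESS-{marker} ; {interpreter}'"

    print(become_command())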
Jan 23 10:45:32 np0005593388.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:32 np0005593388.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:32 np0005593388.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:33 np0005593388.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:33 np0005593388.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:33 np0005593388.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:34 np0005593388.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:34 np0005593388.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:34 np0005593388.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:34 np0005593388.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:35 np0005593388.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:35 np0005593388.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:35 np0005593388.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:36 np0005593388.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:36 np0005593388.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:36 np0005593388.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:37 np0005593388.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:37 np0005593388.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:37 np0005593388.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:38 np0005593388.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:38 np0005593388.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:38 np0005593388.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:38 np0005593388.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:39 np0005593388.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:39 np0005593388.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:45:39 np0005593388.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
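The run of ansible-authorized_key calls above is idempotent key management: each invocation appends its key to ~zuul/.ssh/authorized_keys only if an identical line is not already present, reporting changed or unchanged accordingly. A minimal sketch of that behavior:

    from pathlib import Path

    def ensure_authorized_key(home: Path, key: str) -> bool:
        """Append an SSH public key unless an identical line already exists."""
        auth = home / ".ssh" / "authorized_keys"
        auth.parent.mkdir(mode=0o700, exist_ok=True)
        lines = auth.read_text().splitlines() if auth.exists() else []
        if key in lines:
            return False                 # unchanged
        with auth.open("a") as f:
            f.write(key + "\n")
        auth.chmod(0o600)
        return True                      # changed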
Jan 23 10:45:41 np0005593388.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pacvcnstfmxfuvbagzwspwumgccweksw ; /usr/bin/python3'
Jan 23 10:45:41 np0005593388.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:41 np0005593388.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 10:45:41 np0005593388.novalocal systemd[1]: Starting Time & Date Service...
Jan 23 10:45:41 np0005593388.novalocal systemd[1]: Started Time & Date Service.
Jan 23 10:45:41 np0005593388.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 23 10:45:41 np0005593388.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
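community.general.timezone delegates to systemd-timedated over D-Bus, which is why the service starts on demand, records the change, and idles out shortly afterwards. The CLI equivalent is timedatectl; a subprocess sketch:

    import subprocess

    # timedatectl is the CLI front end for org.freedesktop.timedate1.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)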
Jan 23 10:45:43 np0005593388.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvoyidxmvrgxbfnfhbugavfqeeyvgvmo ; /usr/bin/python3'
Jan 23 10:45:43 np0005593388.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:43 np0005593388.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:43 np0005593388.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:43 np0005593388.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:44 np0005593388.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769165143.6024685-153-116115802499914/source _original_basename=tmp12wiq461 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:44 np0005593388.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:45 np0005593388.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769165144.5063245-183-29065745970521/source _original_basename=tmpv14ebld6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
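Both sub_nodes copies report checksum da39a3ee5e6b4b0d3255bfef95601890afd80709, which is the SHA-1 of zero bytes, so the files written to /etc/nodepool here are empty placeholders. Quick verification:

    import hashlib

    assert hashlib.sha1(b"").hexdigest() == \
        "da39a3ee5e6b4b0d3255bfef95601890afd80709"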
Jan 23 10:45:45 np0005593388.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohemzimfqjmyprrzftzaimzdtseqryud ; /usr/bin/python3'
Jan 23 10:45:45 np0005593388.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:45 np0005593388.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:45 np0005593388.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:46 np0005593388.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvepezvcpynrcmpztkacxlxrrxswtiti ; /usr/bin/python3'
Jan 23 10:45:46 np0005593388.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:46 np0005593388.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769165145.6153374-231-15699976975919/source _original_basename=tmp7r1wcd35 follow=False checksum=9002ae785196258bce68f82c9276ee1756ef1744 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:46 np0005593388.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:46 np0005593388.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:45:47 np0005593388.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:45:47 np0005593388.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqsnfjhxrjqmremdukzlmapxbqqzfcjw ; /usr/bin/python3'
Jan 23 10:45:47 np0005593388.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:47 np0005593388.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:45:47 np0005593388.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:47 np0005593388.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzwrimchtjcnvpejqkrsxmuuugxuxqrr ; /usr/bin/python3'
Jan 23 10:45:47 np0005593388.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:48 np0005593388.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769165147.3241937-273-125939224972055/source _original_basename=tmp7hhvkifx follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:48 np0005593388.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Jan 23 10:45:48 np0005593388.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnhytoayyvrdixqhexcnlsobfqfpura ; /usr/bin/python3'
Jan 23 10:45:48 np0005593388.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:45:48 np0005593388.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-2a75-d3cc-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:45:48 np0005593388.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
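After installing /etc/sudoers.d/zuul-sudo-grep (mode 288, i.e. 0o440), the playbook validates the whole sudoers tree with visudo -c, which parses without editing. The same check from Python (run as root):

    import subprocess

    result = subprocess.run(["/usr/sbin/visudo", "-c"],
                            capture_output=True, text=True)
    print(result.stdout, end="")
    if result.returncode != 0:
        raise SystemExit("sudoers syntax check failed:\n" + result.stderr)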
Jan 23 10:45:49 np0005593388.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-2a75-d3cc-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 23 10:45:50 np0005593388.novalocal python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:45:56 np0005593388.novalocal chronyd[792]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Jan 23 10:46:07 np0005593388.novalocal sudo[6946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upjxzllnnenwzlzfugzwyjeiqdyopveh ; /usr/bin/python3'
Jan 23 10:46:07 np0005593388.novalocal sudo[6946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:46:08 np0005593388.novalocal python3[6948]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:46:08 np0005593388.novalocal sudo[6946]: pam_unix(sudo:session): session closed for user root
Jan 23 10:46:11 np0005593388.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 23 10:46:43 np0005593388.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 23 10:46:43 np0005593388.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
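The hot-plugged device at 0000:00:07.0 identifies as 1af4:1000, the Red Hat virtio network device, which is why a new eth1 appears moments later. Reading the same IDs back from sysfs:

    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0000:00:07.0")
    vendor = (dev / "vendor").read_text().strip()   # expect 0x1af4 (Red Hat, virtio)
    device = (dev / "device").read_text().strip()   # expect 0x1000 (virtio-net)
    print(vendor, device)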
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.2988] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 10:46:43 np0005593388.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3320] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3343] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3345] device (eth1): carrier: link connected
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3347] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3352] policy: auto-activating connection 'Wired connection 1' (15b44493-7c24-36e8-977a-5ba5b78aa3d2)
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3356] device (eth1): Activation: starting connection 'Wired connection 1' (15b44493-7c24-36e8-977a-5ba5b78aa3d2)
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3357] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3359] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3363] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 10:46:43 np0005593388.novalocal NetworkManager[858]: <info>  [1769165203.3366] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 10:46:44 np0005593388.novalocal python3[6978]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-a710-7028-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
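ip -j link emits the link table as JSON, so the playbook can parse interface state instead of screen-scraping column output. A parsing sketch:

    import json
    import subprocess

    out = subprocess.run(["ip", "-j", "link"],
                         capture_output=True, text=True, check=True).stdout
    for link in json.loads(out):
        print(link["ifname"], link.get("operstate"))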
Jan 23 10:46:51 np0005593388.novalocal sudo[7056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyzkqjhxfquxruurmbebqqqnysfzopez ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 10:46:51 np0005593388.novalocal sudo[7056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:46:51 np0005593388.novalocal python3[7058]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:46:51 np0005593388.novalocal sudo[7056]: pam_unix(sudo:session): session closed for user root
Jan 23 10:46:51 np0005593388.novalocal sudo[7129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfvxakiyoongajynwjuiorkxyzemfaw ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 10:46:51 np0005593388.novalocal sudo[7129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:46:52 np0005593388.novalocal python3[7131]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769165211.219446-102-83882430817851/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=36d3725630a80f24fe3bcbdcb1558d6993d272d7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:46:52 np0005593388.novalocal sudo[7129]: pam_unix(sudo:session): session closed for user root
Jan 23 10:46:52 np0005593388.novalocal sudo[7179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzkbsojrdfhooyrcrmiqeiqlljwadgxh ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 10:46:52 np0005593388.novalocal sudo[7179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:46:52 np0005593388.novalocal python3[7181]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9343] caught SIGTERM, shutting down normally.
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Stopping Network Manager...
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9357] dhcp4 (eth0): canceled DHCP transaction
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9357] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9357] dhcp4 (eth0): state changed no lease
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9360] manager: NetworkManager state is now CONNECTING
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9493] dhcp4 (eth1): canceled DHCP transaction
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9494] dhcp4 (eth1): state changed no lease
Jan 23 10:46:52 np0005593388.novalocal NetworkManager[858]: <info>  [1769165212.9572] exiting (success)
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Stopped Network Manager.
Jan 23 10:46:52 np0005593388.novalocal systemd[1]: Starting Network Manager...
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.0460] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:99274698-eb02-4e43-8d1b-7c4762b80d7f)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.0464] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.0541] manager[0x5574b07ea000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 10:46:53 np0005593388.novalocal systemd[1]: Starting Hostname Service...
Jan 23 10:46:53 np0005593388.novalocal systemd[1]: Started Hostname Service.
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1597] hostname: hostname: using hostnamed
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1598] hostname: static hostname changed from (none) to "np0005593388.novalocal"
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1603] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1607] manager[0x5574b07ea000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1607] manager[0x5574b07ea000]: rfkill: WWAN hardware radio set enabled
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1631] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1631] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1632] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1632] manager: Networking is enabled by state file
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1635] settings: Loaded settings plugin: keyfile (internal)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1638] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1657] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1665] dhcp: init: Using DHCP client 'internal'
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1668] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1673] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1677] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1685] device (lo): Activation: starting connection 'lo' (db09884b-81ca-48ab-b981-7d9244c8c055)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1691] device (eth0): carrier: link connected
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1695] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1699] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1700] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1705] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1711] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1717] device (eth1): carrier: link connected
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1721] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1726] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (15b44493-7c24-36e8-977a-5ba5b78aa3d2) (indicated)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1726] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1731] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1736] device (eth1): Activation: starting connection 'Wired connection 1' (15b44493-7c24-36e8-977a-5ba5b78aa3d2)
Jan 23 10:46:53 np0005593388.novalocal systemd[1]: Started Network Manager.
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1763] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1775] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1777] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1779] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1780] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1782] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1785] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1786] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1789] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1795] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1798] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1805] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1807] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1825] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1826] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1830] device (lo): Activation: successful, device activated.
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1836] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1843] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 10:46:53 np0005593388.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1916] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1946] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1948] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1951] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1954] device (eth0): Activation: successful, device activated.
Jan 23 10:46:53 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165213.1959] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 10:46:53 np0005593388.novalocal sudo[7179]: pam_unix(sudo:session): session closed for user root
Jan 23 10:46:53 np0005593388.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-a710-7028-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:47:03 np0005593388.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 10:47:23 np0005593388.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9306] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 10:47:38 np0005593388.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 10:47:38 np0005593388.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9685] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9689] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9700] device (eth1): Activation: successful, device activated.
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9712] manager: startup complete
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9716] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <warn>  [1769165258.9724] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9741] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 23 10:47:38 np0005593388.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9893] dhcp4 (eth1): canceled DHCP transaction
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9895] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9897] dhcp4 (eth1): state changed no lease
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9928] policy: auto-activating connection 'ci-private-network' (b5bacad9-2802-5eea-a61e-31e1674ecfc2)
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9938] device (eth1): Activation: starting connection 'ci-private-network' (b5bacad9-2802-5eea-a61e-31e1674ecfc2)
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9941] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9947] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9959] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 10:47:38 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165258.9974] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 10:47:39 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165259.0239] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 10:47:39 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165259.0259] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 10:47:39 np0005593388.novalocal NetworkManager[7190]: <info>  [1769165259.0268] device (eth1): Activation: successful, device activated.
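
The eth1 sequence above is NetworkManager's assume-then-fallback path: the assumed 'Wired connection 1' profile waits out its 45-second DHCP window, is marked failed with reason 'ip-config-unavailable' ("state changed no lease"), and NM then auto-activates the 'ci-private-network' profile, which completes. A minimal sketch for inspecting this from a shell, assuming the stock nmcli client that ships with NetworkManager:

    # Current device state and which profile is active on eth1
    nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show eth1
    # Profiles NM could auto-activate, with their UUIDs (compare against the log)
    nmcli -f NAME,UUID,DEVICE connection show
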
Jan 23 10:47:49 np0005593388.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 10:47:52 np0005593388.novalocal systemd[4307]: Starting Mark boot as successful...
Jan 23 10:47:52 np0005593388.novalocal systemd[4307]: Finished Mark boot as successful.
Jan 23 10:47:53 np0005593388.novalocal sshd-session[4316]: Received disconnect from 38.102.83.114 port 38312:11: disconnected by user
Jan 23 10:47:53 np0005593388.novalocal sshd-session[4316]: Disconnected from user zuul 38.102.83.114 port 38312
Jan 23 10:47:53 np0005593388.novalocal sshd-session[4303]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:47:53 np0005593388.novalocal systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Jan 23 10:47:57 np0005593388.novalocal sshd-session[7295]: Accepted publickey for zuul from 38.102.83.114 port 40946 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 10:47:57 np0005593388.novalocal systemd-logind[798]: New session 3 of user zuul.
Jan 23 10:47:57 np0005593388.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 23 10:47:57 np0005593388.novalocal sshd-session[7295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:47:57 np0005593388.novalocal sudo[7374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhvmmzvtoczbbqnlftbkkookcatmoqab ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 10:47:57 np0005593388.novalocal sudo[7374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:47:57 np0005593388.novalocal python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:47:57 np0005593388.novalocal sudo[7374]: pam_unix(sudo:session): session closed for user root
Jan 23 10:47:57 np0005593388.novalocal sudo[7447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euqtrmzapybmsfhggqnejpbzaztqnbjv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 10:47:57 np0005593388.novalocal sudo[7447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:47:58 np0005593388.novalocal python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769165277.3198957-259-88066096679770/source _original_basename=tmpw0br789x follow=False checksum=f5ce5d2491e8cb1af1e3765e84d705185f3cc9ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:47:58 np0005593388.novalocal sudo[7447]: pam_unix(sudo:session): session closed for user root
Jan 23 10:48:00 np0005593388.novalocal sshd-session[7298]: Connection closed by 38.102.83.114 port 40946
Jan 23 10:48:00 np0005593388.novalocal sshd-session[7295]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:48:00 np0005593388.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 10:48:00 np0005593388.novalocal systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Jan 23 10:48:00 np0005593388.novalocal systemd-logind[798]: Removed session 3.
Jan 23 10:50:52 np0005593388.novalocal systemd[4307]: Created slice User Background Tasks Slice.
Jan 23 10:50:52 np0005593388.novalocal systemd[4307]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 10:50:52 np0005593388.novalocal systemd[4307]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 10:54:44 np0005593388.novalocal sshd-session[7479]: Accepted publickey for zuul from 38.102.83.114 port 35554 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 10:54:44 np0005593388.novalocal systemd-logind[798]: New session 4 of user zuul.
Jan 23 10:54:44 np0005593388.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 23 10:54:44 np0005593388.novalocal sshd-session[7479]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:54:44 np0005593388.novalocal sudo[7506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spqdmubiunkkltpsxdsazrkdrokeevlv ; /usr/bin/python3'
Jan 23 10:54:44 np0005593388.novalocal sudo[7506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:45 np0005593388.novalocal python3[7508]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-e89b-4aee-00000000217f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:54:45 np0005593388.novalocal sudo[7506]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:45 np0005593388.novalocal sudo[7535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mirzloqciepsxdyyifrsjsrhuykjeuqp ; /usr/bin/python3'
Jan 23 10:54:45 np0005593388.novalocal sudo[7535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:45 np0005593388.novalocal python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:54:45 np0005593388.novalocal sudo[7535]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:45 np0005593388.novalocal sudo[7561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dguippyzfprmuicyvbbcphwwwpujgjnp ; /usr/bin/python3'
Jan 23 10:54:45 np0005593388.novalocal sudo[7561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:45 np0005593388.novalocal python3[7563]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:54:45 np0005593388.novalocal sudo[7561]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:45 np0005593388.novalocal sudo[7587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlgvlnplwtemhnbreebozlmolltqihe ; /usr/bin/python3'
Jan 23 10:54:45 np0005593388.novalocal sudo[7587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:46 np0005593388.novalocal python3[7589]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:54:46 np0005593388.novalocal sudo[7587]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:46 np0005593388.novalocal sudo[7613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhoipmhamebnmaqbwajojthdwwxuxika ; /usr/bin/python3'
Jan 23 10:54:46 np0005593388.novalocal sudo[7613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:46 np0005593388.novalocal python3[7615]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:54:46 np0005593388.novalocal sudo[7613]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:46 np0005593388.novalocal sudo[7639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyatfbghxbwhlodlssnrjxbywapfwyug ; /usr/bin/python3'
Jan 23 10:54:46 np0005593388.novalocal sudo[7639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:47 np0005593388.novalocal python3[7641]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:54:47 np0005593388.novalocal sudo[7639]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:47 np0005593388.novalocal sudo[7717]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwyabgjgcblxbgmrlogrwacngshptruq ; /usr/bin/python3'
Jan 23 10:54:47 np0005593388.novalocal sudo[7717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:47 np0005593388.novalocal python3[7719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:54:47 np0005593388.novalocal sudo[7717]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:47 np0005593388.novalocal sudo[7790]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oasbhiddarcmshsyhshbcofjjrcamlak ; /usr/bin/python3'
Jan 23 10:54:47 np0005593388.novalocal sudo[7790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:48 np0005593388.novalocal python3[7792]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769165687.3815713-510-72399738513855/source _original_basename=tmp4f6cglwr follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:54:48 np0005593388.novalocal sudo[7790]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:48 np0005593388.novalocal sudo[7840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypayxrbrxzajfzylouediunnebqqiofy ; /usr/bin/python3'
Jan 23 10:54:48 np0005593388.novalocal sudo[7840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:49 np0005593388.novalocal python3[7842]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:54:49 np0005593388.novalocal systemd[1]: Reloading.
Jan 23 10:54:49 np0005593388.novalocal systemd-rc-local-generator[7859]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:54:49 np0005593388.novalocal sudo[7840]: pam_unix(sudo:session): session closed for user root
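
The three tasks above drop an override into /etc/systemd/system.conf.d and then run a daemon reload (ansible.builtin.systemd_service with daemon_reload=True). The override's content is not logged (content=NOT_LOGGING_PARAMETER), so the following is only a sketch of the mechanism, with a placeholder setting:

    # Placeholder [Manager] setting; the real override shipped by the job is not in the log
    mkdir -p /etc/systemd/system.conf.d
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultTasksMax=infinity
    EOF
    systemctl daemon-reload   # produces the 'systemd[1]: Reloading.' line seen above
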
Jan 23 10:54:50 np0005593388.novalocal sudo[7895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trnuhsmrwfbcrskseavpuuqjfmbclyhp ; /usr/bin/python3'
Jan 23 10:54:50 np0005593388.novalocal sudo[7895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:50 np0005593388.novalocal python3[7897]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 23 10:54:50 np0005593388.novalocal sudo[7895]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:50 np0005593388.novalocal sudo[7921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiebkdrgyoleeqbjpnbyfzlbdzrjilsp ; /usr/bin/python3'
Jan 23 10:54:50 np0005593388.novalocal sudo[7921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:50 np0005593388.novalocal python3[7923]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:54:50 np0005593388.novalocal sudo[7921]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:51 np0005593388.novalocal sudo[7949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckdabduevtvgmiuowpnfmzkvgymoytav ; /usr/bin/python3'
Jan 23 10:54:51 np0005593388.novalocal sudo[7949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:51 np0005593388.novalocal python3[7951]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:54:51 np0005593388.novalocal sudo[7949]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:51 np0005593388.novalocal sudo[7977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-necbwgzphefmijtgbhfdytghhmwwmjsg ; /usr/bin/python3'
Jan 23 10:54:51 np0005593388.novalocal sudo[7977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:51 np0005593388.novalocal python3[7979]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:54:51 np0005593388.novalocal sudo[7977]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:51 np0005593388.novalocal sudo[8005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaxkvzjhcqwcgmaordsljnylujlayyma ; /usr/bin/python3'
Jan 23 10:54:51 np0005593388.novalocal sudo[8005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:51 np0005593388.novalocal python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:54:51 np0005593388.novalocal sudo[8005]: pam_unix(sudo:session): session closed for user root
Jan 23 10:54:52 np0005593388.novalocal python3[8034]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-e89b-4aee-000000002186-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:54:53 np0005593388.novalocal python3[8064]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
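
Taken together, the tasks since 10:54:45 implement cgroup-v2 I/O throttling: resolve the root disk's MAJ:MIN with lsblk, write the same io.max rule into each top-level slice, then read the files back to verify. The equivalent shell, using only values that appear in the log:

    # 252:0 is /dev/vda's MAJ:MIN (from: lsblk -nd -o MAJ:MIN /dev/vda)
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$slice/io.max"
    done
    cat /sys/fs/cgroup/system.slice/io.max   # read-back, mirroring the job's verification step
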
Jan 23 10:54:54 np0005593388.novalocal sshd-session[7482]: Connection closed by 38.102.83.114 port 35554
Jan 23 10:54:54 np0005593388.novalocal sshd-session[7479]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:54:54 np0005593388.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 10:54:54 np0005593388.novalocal systemd[1]: session-4.scope: Consumed 3.858s CPU time.
Jan 23 10:54:54 np0005593388.novalocal systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Jan 23 10:54:54 np0005593388.novalocal systemd-logind[798]: Removed session 4.
Jan 23 10:54:56 np0005593388.novalocal sshd-session[8072]: Accepted publickey for zuul from 38.102.83.114 port 54800 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 10:54:56 np0005593388.novalocal systemd-logind[798]: New session 5 of user zuul.
Jan 23 10:54:56 np0005593388.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 23 10:54:56 np0005593388.novalocal sshd-session[8072]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:54:56 np0005593388.novalocal sudo[8099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pduyjkgcluprxayznohhzmyybimpzfbh ; /usr/bin/python3'
Jan 23 10:54:56 np0005593388.novalocal sudo[8099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:54:56 np0005593388.novalocal python3[8101]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 10:55:02 np0005593388.novalocal setsebool[8139]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 23 10:55:02 np0005593388.novalocal setsebool[8139]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 23 10:55:04 np0005593388.novalocal irqbalance[788]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 23 10:55:04 np0005593388.novalocal irqbalance[788]: IRQ 27 affinity is now unmanaged
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 10:55:15 np0005593388.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 10:55:24 np0005593388.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 10:55:42 np0005593388.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 10:55:42 np0005593388.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 10:55:42 np0005593388.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 23 10:55:42 np0005593388.novalocal systemd[1]: Reloading.
Jan 23 10:55:42 np0005593388.novalocal systemd-rc-local-generator[8910]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:55:43 np0005593388.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 10:55:44 np0005593388.novalocal sudo[8099]: pam_unix(sudo:session): session closed for user root
Jan 23 10:55:55 np0005593388.novalocal python3[17036]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-216c-f806-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:55:56 np0005593388.novalocal kernel: evm: overlay not supported
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: Starting D-Bus User Message Bus...
Jan 23 10:55:56 np0005593388.novalocal dbus-broker-launch[17491]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 23 10:55:56 np0005593388.novalocal dbus-broker-launch[17491]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: Started D-Bus User Message Bus.
Jan 23 10:55:56 np0005593388.novalocal dbus-broker-lau[17491]: Ready
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: Created slice Slice /user.
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: podman-17415.scope: unit configures an IP firewall, but not running as root.
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: (This warning is only shown for the first unit using IP firewalling.)
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: Started podman-17415.scope.
Jan 23 10:55:56 np0005593388.novalocal systemd[4307]: Started podman-pause-9b48bd51.scope.
Jan 23 10:55:59 np0005593388.novalocal sudo[19335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucdqcitlzmrrcnhkqsplnungevzylicj ; /usr/bin/python3'
Jan 23 10:55:59 np0005593388.novalocal sudo[19335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:55:59 np0005593388.novalocal python3[19348]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.98:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.98:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:55:59 np0005593388.novalocal python3[19348]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 23 10:56:00 np0005593388.novalocal sudo[19335]: pam_unix(sudo:session): session closed for user root
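
The blockinfile invocation above appends a managed block to /etc/containers/registries.conf so podman/buildah will pull from the CI registry without TLS. Reconstructed from the module arguments recorded in the log (both the block content and the marker format are in the invocation line):

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.98:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF
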
Jan 23 10:56:00 np0005593388.novalocal sshd-session[8075]: Connection closed by 38.102.83.114 port 54800
Jan 23 10:56:00 np0005593388.novalocal sshd-session[8072]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:56:00 np0005593388.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 10:56:00 np0005593388.novalocal systemd[1]: session-5.scope: Consumed 43.899s CPU time.
Jan 23 10:56:00 np0005593388.novalocal systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Jan 23 10:56:00 np0005593388.novalocal systemd-logind[798]: Removed session 5.
Jan 23 10:56:19 np0005593388.novalocal sshd-session[28728]: Unable to negotiate with 38.102.83.196 port 44826: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 23 10:56:19 np0005593388.novalocal sshd-session[28732]: Unable to negotiate with 38.102.83.196 port 44832: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 23 10:56:19 np0005593388.novalocal sshd-session[28735]: Connection closed by 38.102.83.196 port 44804 [preauth]
Jan 23 10:56:20 np0005593388.novalocal sshd-session[28733]: Connection closed by 38.102.83.196 port 44810 [preauth]
Jan 23 10:56:20 np0005593388.novalocal sshd-session[28736]: Unable to negotiate with 38.102.83.196 port 44828: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
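
The preauth failures above are scanner-style connections (consistent with an ssh-keyscan pass) that each offered only one host-key algorithm, ssh-ed25519 or an sk-* FIDO variant, for which this sshd has no corresponding host key, so negotiation stops before authentication. A quick way to check what the daemon can offer, assuming the default /etc/ssh layout:

    ls -l /etc/ssh/ssh_host_*_key.pub   # host keys currently available to sshd
    ssh-keygen -A                       # generate any missing default-type host keys
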
Jan 23 10:56:21 np0005593388.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 10:56:21 np0005593388.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 23 10:56:21 np0005593388.novalocal systemd[1]: man-db-cache-update.service: Consumed 47.867s CPU time.
Jan 23 10:56:21 np0005593388.novalocal systemd[1]: run-r86e894232a3541c6b231adb40818f881.service: Deactivated successfully.
Jan 23 10:56:24 np0005593388.novalocal sshd-session[29578]: Accepted publickey for zuul from 38.102.83.114 port 60704 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 10:56:24 np0005593388.novalocal systemd-logind[798]: New session 6 of user zuul.
Jan 23 10:56:24 np0005593388.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 23 10:56:24 np0005593388.novalocal sshd-session[29578]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:56:25 np0005593388.novalocal python3[29605]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOYDxN/nSaeV+5B4KpR+mVU2gkD/Wh7SrM67+eCMK3rWaTGyXRCp+HscEmLLMYw6B2U5eyAI6KqoDOA2+gm6cXY= zuul@np0005593387.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:56:25 np0005593388.novalocal sudo[29629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczgxgwaakkxhgucabtuthzswwotyutz ; /usr/bin/python3'
Jan 23 10:56:25 np0005593388.novalocal sudo[29629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:56:25 np0005593388.novalocal python3[29631]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOYDxN/nSaeV+5B4KpR+mVU2gkD/Wh7SrM67+eCMK3rWaTGyXRCp+HscEmLLMYw6B2U5eyAI6KqoDOA2+gm6cXY= zuul@np0005593387.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:56:25 np0005593388.novalocal sudo[29629]: pam_unix(sudo:session): session closed for user root
Jan 23 10:56:26 np0005593388.novalocal sudo[29655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmpqifsrlaighejgfeykmlnqqwbyovnl ; /usr/bin/python3'
Jan 23 10:56:26 np0005593388.novalocal sudo[29655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:56:26 np0005593388.novalocal python3[29657]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005593388.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 23 10:56:26 np0005593388.novalocal useradd[29659]: new group: name=cloud-admin, GID=1002
Jan 23 10:56:26 np0005593388.novalocal useradd[29659]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 23 10:56:26 np0005593388.novalocal sudo[29655]: pam_unix(sudo:session): session closed for user root
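
The user task above reduces to a plain useradd; the log confirms the resulting IDs and home directory (UID=1002, GID=1002, /home/cloud-admin):

    # Equivalent of the ansible.builtin.user invocation recorded above
    useradd --create-home --shell /bin/bash cloud-admin
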
Jan 23 10:56:26 np0005593388.novalocal sudo[29689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idoihmxwvyzzrqrlykfecziqfcsyqubf ; /usr/bin/python3'
Jan 23 10:56:26 np0005593388.novalocal sudo[29689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:56:26 np0005593388.novalocal python3[29691]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOYDxN/nSaeV+5B4KpR+mVU2gkD/Wh7SrM67+eCMK3rWaTGyXRCp+HscEmLLMYw6B2U5eyAI6KqoDOA2+gm6cXY= zuul@np0005593387.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 10:56:26 np0005593388.novalocal sudo[29689]: pam_unix(sudo:session): session closed for user root
Jan 23 10:56:26 np0005593388.novalocal sudo[29767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbmuxaeeeavidfexjoufpyofpkhdnwca ; /usr/bin/python3'
Jan 23 10:56:26 np0005593388.novalocal sudo[29767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:56:27 np0005593388.novalocal python3[29769]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 10:56:27 np0005593388.novalocal sudo[29767]: pam_unix(sudo:session): session closed for user root
Jan 23 10:56:27 np0005593388.novalocal sudo[29840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezrckldizcfdzhorledsfzdekyayfjhf ; /usr/bin/python3'
Jan 23 10:56:27 np0005593388.novalocal sudo[29840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:56:27 np0005593388.novalocal python3[29842]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769165786.8041098-135-96898786271737/source _original_basename=tmpt0xkufh8 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:56:27 np0005593388.novalocal sudo[29840]: pam_unix(sudo:session): session closed for user root
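
The copy above installs a sudoers drop-in for cloud-admin with mode 0640; its content is not logged (content=NOT_LOGGING_PARAMETER). A sketch with a typical CI-style rule as a placeholder, validated the safe way:

    # Placeholder rule only; the real drop-in content is not in the log
    cat > /etc/sudoers.d/cloud-admin <<'EOF'
    cloud-admin ALL=(ALL) NOPASSWD:ALL
    EOF
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin   # syntax-check before it can break sudo
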
Jan 23 10:56:28 np0005593388.novalocal sudo[29890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyogdzoigzlklzdlyjesdipstejjiksn ; /usr/bin/python3'
Jan 23 10:56:28 np0005593388.novalocal sudo[29890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:56:28 np0005593388.novalocal python3[29892]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 23 10:56:28 np0005593388.novalocal systemd[1]: Starting Hostname Service...
Jan 23 10:56:28 np0005593388.novalocal systemd[1]: Started Hostname Service.
Jan 23 10:56:28 np0005593388.novalocal systemd-hostnamed[29896]: Changed pretty hostname to 'compute-0'
Jan 23 10:56:28 compute-0 systemd-hostnamed[29896]: Hostname set to <compute-0> (static)
Jan 23 10:56:28 compute-0 NetworkManager[7190]: <info>  [1769165788.4504] hostname: static hostname changed from "np0005593388.novalocal" to "compute-0"
Jan 23 10:56:28 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 10:56:28 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 10:56:28 compute-0 sudo[29890]: pam_unix(sudo:session): session closed for user root
Jan 23 10:56:29 compute-0 sshd-session[29581]: Connection closed by 38.102.83.114 port 60704
Jan 23 10:56:29 compute-0 sshd-session[29578]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:56:29 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 10:56:29 compute-0 systemd[1]: session-6.scope: Consumed 2.188s CPU time.
Jan 23 10:56:29 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Jan 23 10:56:29 compute-0 systemd-logind[798]: Removed session 6.
Jan 23 10:56:38 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 10:56:58 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
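
The rename above is ansible.builtin.hostname with use=systemd, which goes through systemd-hostnamed; note NetworkManager picking up the change and the log's host field switching from np0005593388.novalocal to compute-0 mid-stream. The one-line equivalent:

    hostnamectl set-hostname compute-0   # sets the static (and pretty) hostname via systemd-hostnamed
    hostnamectl                          # confirm the new names
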
Jan 23 10:59:52 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 23 10:59:52 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 23 10:59:52 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 23 10:59:52 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 23 11:00:52 compute-0 systemd[1]: Starting dnf makecache...
Jan 23 11:00:52 compute-0 dnf[29922]: Failed determining last makecache time.
Jan 23 11:00:52 compute-0 dnf[29922]: CentOS Stream 9 - BaseOS                         71 kB/s | 6.7 kB     00:00
Jan 23 11:00:53 compute-0 dnf[29922]: CentOS Stream 9 - AppStream                      75 kB/s | 6.8 kB     00:00
Jan 23 11:00:53 compute-0 dnf[29922]: CentOS Stream 9 - CRB                            60 kB/s | 6.6 kB     00:00
Jan 23 11:00:53 compute-0 dnf[29922]: CentOS Stream 9 - Extras packages                75 kB/s | 7.3 kB     00:00
Jan 23 11:00:53 compute-0 dnf[29922]: Metadata cache created.
Jan 23 11:00:53 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 23 11:00:53 compute-0 systemd[1]: Finished dnf makecache.
Jan 23 11:01:01 compute-0 CROND[29929]: (root) CMD (run-parts /etc/cron.hourly)
Jan 23 11:01:01 compute-0 run-parts[29932]: (/etc/cron.hourly) starting 0anacron
Jan 23 11:01:01 compute-0 anacron[29940]: Anacron started on 2026-01-23
Jan 23 11:01:01 compute-0 anacron[29940]: Will run job `cron.daily' in 38 min.
Jan 23 11:01:01 compute-0 anacron[29940]: Will run job `cron.weekly' in 58 min.
Jan 23 11:01:01 compute-0 anacron[29940]: Will run job `cron.monthly' in 78 min.
Jan 23 11:01:01 compute-0 anacron[29940]: Jobs will be executed sequentially
Jan 23 11:01:01 compute-0 run-parts[29942]: (/etc/cron.hourly) finished 0anacron
Jan 23 11:01:01 compute-0 CROND[29928]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 23 11:01:27 compute-0 sshd-session[29944]: Accepted publickey for zuul from 38.102.83.196 port 53384 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 11:01:27 compute-0 systemd-logind[798]: New session 7 of user zuul.
Jan 23 11:01:27 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 23 11:01:27 compute-0 sshd-session[29944]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:01:27 compute-0 python3[30020]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:01:29 compute-0 sudo[30134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sniyempzuyvxhwsynvolzcoziuvxqbnt ; /usr/bin/python3'
Jan 23 11:01:29 compute-0 sudo[30134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:29 compute-0 python3[30136]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:29 compute-0 sudo[30134]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:29 compute-0 sudo[30207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybdgtnswpibmshvycprkjqtbghtsltzi ; /usr/bin/python3'
Jan 23 11:01:29 compute-0 sudo[30207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:29 compute-0 python3[30209]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:29 compute-0 sudo[30207]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:29 compute-0 sudo[30233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjmagfhbucowqfqgoppcdsnvuuiwgdl ; /usr/bin/python3'
Jan 23 11:01:29 compute-0 sudo[30233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:30 compute-0 python3[30235]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:30 compute-0 sudo[30233]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:30 compute-0 sudo[30306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifexjdhdppmomotziaigfivkppghkjxo ; /usr/bin/python3'
Jan 23 11:01:30 compute-0 sudo[30306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:30 compute-0 python3[30308]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:30 compute-0 sudo[30306]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:30 compute-0 sudo[30332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqcgqfpofzzeeukplvjhfpfghstuxumo ; /usr/bin/python3'
Jan 23 11:01:30 compute-0 sudo[30332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:30 compute-0 python3[30334]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:30 compute-0 sudo[30332]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:31 compute-0 sudo[30405]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huuuxtjvqhznrmfztuumlgdzechycnqb ; /usr/bin/python3'
Jan 23 11:01:31 compute-0 sudo[30405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:31 compute-0 python3[30407]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:31 compute-0 sudo[30405]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:31 compute-0 sudo[30431]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbcabmpujbrvbpxlgpkeevdhxdcmwpvj ; /usr/bin/python3'
Jan 23 11:01:31 compute-0 sudo[30431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:31 compute-0 python3[30433]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:31 compute-0 sudo[30431]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:32 compute-0 sudo[30504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgrmjymxbujczxafgypeeiqhsafiftkc ; /usr/bin/python3'
Jan 23 11:01:32 compute-0 sudo[30504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:32 compute-0 python3[30506]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:32 compute-0 sudo[30504]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:32 compute-0 sudo[30530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmgytvoffkpbvvakytbgbmjtofuryges ; /usr/bin/python3'
Jan 23 11:01:32 compute-0 sudo[30530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:32 compute-0 python3[30532]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:32 compute-0 sudo[30530]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:32 compute-0 sudo[30603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eicthupyokjmpcxuvayaqckgkeishikq ; /usr/bin/python3'
Jan 23 11:01:32 compute-0 sudo[30603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:32 compute-0 python3[30605]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:32 compute-0 sudo[30603]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:32 compute-0 sudo[30629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntihfscuafpaarjuwjxtjxaqmoubyhhg ; /usr/bin/python3'
Jan 23 11:01:32 compute-0 sudo[30629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:33 compute-0 python3[30631]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:33 compute-0 sudo[30629]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:33 compute-0 sudo[30702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afparglzbzhletlesuuwhhzplqcenhov ; /usr/bin/python3'
Jan 23 11:01:33 compute-0 sudo[30702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:33 compute-0 python3[30704]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:33 compute-0 sudo[30702]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:33 compute-0 sudo[30728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrxonwkqbrxpbksleblszywajxjzrev ; /usr/bin/python3'
Jan 23 11:01:33 compute-0 sudo[30728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:33 compute-0 python3[30730]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 11:01:33 compute-0 sudo[30728]: pam_unix(sudo:session): session closed for user root
Jan 23 11:01:33 compute-0 sudo[30801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtiiggwhocgjaljycktunmqslwayzwkl ; /usr/bin/python3'
Jan 23 11:01:33 compute-0 sudo[30801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:01:33 compute-0 python3[30803]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769166089.03316-33613-196218994038238/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:01:33 compute-0 sudo[30801]: pam_unix(sudo:session): session closed for user root
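
The stat/copy pairs since 11:01:29 stage six dnf repo definitions (delorean, delorean-antelope-testing, and four repo-setup-centos-* files) plus a delorean.repo.md5 checksum file. The copied contents are not logged, so this is only the hypothetical shape of one such file:

    # Hypothetical .repo skeleton; the real baseurl/gpg settings are not in the log
    cat > /etc/yum.repos.d/example.repo <<'EOF'
    [example]
    name=Example repository
    baseurl=https://example.invalid/repo/
    enabled=1
    gpgcheck=0
    EOF
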
Jan 23 11:01:36 compute-0 sshd-session[30829]: Unable to negotiate with 192.168.122.11 port 51546: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 23 11:01:36 compute-0 sshd-session[30828]: Connection closed by 192.168.122.11 port 51518 [preauth]
Jan 23 11:01:36 compute-0 sshd-session[30830]: Connection closed by 192.168.122.11 port 51532 [preauth]
Jan 23 11:01:36 compute-0 sshd-session[30833]: Unable to negotiate with 192.168.122.11 port 51538: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 23 11:01:36 compute-0 sshd-session[30831]: Unable to negotiate with 192.168.122.11 port 51556: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 23 11:03:29 compute-0 sshd-session[30839]: Connection closed by 193.32.162.146 port 58672
Jan 23 11:04:25 compute-0 python3[30863]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:08:10 compute-0 sshd-session[30865]: Unable to negotiate with 110.44.115.68 port 49983: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
Jan 23 11:08:32 compute-0 sshd-session[30868]: Invalid user ubuntu from 193.32.162.146 port 59404
Jan 23 11:08:32 compute-0 sshd-session[30868]: Connection closed by invalid user ubuntu 193.32.162.146 port 59404 [preauth]
Jan 23 11:08:45 compute-0 sshd-session[30871]: error: kex_exchange_identification: read: Connection reset by peer
Jan 23 11:08:45 compute-0 sshd-session[30871]: Connection reset by 176.120.22.52 port 8667
Jan 23 11:09:24 compute-0 sshd-session[29947]: Received disconnect from 38.102.83.196 port 53384:11: disconnected by user
Jan 23 11:09:24 compute-0 sshd-session[29947]: Disconnected from user zuul 38.102.83.196 port 53384
Jan 23 11:09:24 compute-0 sshd-session[29944]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:09:24 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 11:09:24 compute-0 systemd[1]: session-7.scope: Consumed 4.676s CPU time.
Jan 23 11:09:24 compute-0 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Jan 23 11:09:24 compute-0 systemd-logind[798]: Removed session 7.
Jan 23 11:10:52 compute-0 sshd-session[30874]: Invalid user validator from 193.32.162.146 port 39552
Jan 23 11:10:52 compute-0 sshd-session[30874]: Connection closed by invalid user validator 193.32.162.146 port 39552 [preauth]
Jan 23 11:13:10 compute-0 sshd-session[30876]: Invalid user node from 193.32.162.146 port 47938
Jan 23 11:13:10 compute-0 sshd-session[30876]: Connection closed by invalid user node 193.32.162.146 port 47938 [preauth]
Jan 23 11:15:21 compute-0 sshd-session[30878]: Invalid user solana from 193.32.162.146 port 56326
Jan 23 11:15:21 compute-0 sshd-session[30878]: Connection closed by invalid user solana 193.32.162.146 port 56326 [preauth]
Jan 23 11:17:23 compute-0 sshd-session[30880]: Connection closed by 58.82.169.249 port 55734
Jan 23 11:17:30 compute-0 sshd-session[30881]: Invalid user sol from 193.32.162.146 port 36464
Jan 23 11:17:30 compute-0 sshd-session[30881]: Connection closed by invalid user sol 193.32.162.146 port 36464 [preauth]
Jan 23 11:18:43 compute-0 sshd-session[30883]: Accepted publickey for zuul from 192.168.122.30 port 58088 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:18:43 compute-0 systemd-logind[798]: New session 8 of user zuul.
Jan 23 11:18:43 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 23 11:18:43 compute-0 sshd-session[30883]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:18:44 compute-0 python3.9[31036]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:18:45 compute-0 sudo[31215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbsgckzlcqfduvjjymmhgdatuzafvcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167124.8159263-27-101038045887991/AnsiballZ_command.py'
Jan 23 11:18:45 compute-0 sudo[31215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:18:45 compute-0 python3.9[31217]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
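
The shell payload Ansible logged above bootstraps the repo-setup tool and enables the current-podified repos for antelope. Restated with comments (commands exactly as logged):

    set -euxo pipefail
    pushd /var/tmp
    # Fetch and unpack the main branch of openstack-k8s-operators/repo-setup
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    # Install into a throwaway virtualenv; PBR_VERSION=0.0.0 sidesteps pbr's need for git metadata
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    # Lay down the current-podified repo files for the antelope branch
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main
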
Jan 23 11:18:52 compute-0 sudo[31215]: pam_unix(sudo:session): session closed for user root
Jan 23 11:18:52 compute-0 sshd-session[30886]: Connection closed by 192.168.122.30 port 58088
Jan 23 11:18:52 compute-0 sshd-session[30883]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:18:52 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 11:18:52 compute-0 systemd[1]: session-8.scope: Consumed 7.871s CPU time.
Jan 23 11:18:52 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Jan 23 11:18:52 compute-0 systemd-logind[798]: Removed session 8.
Jan 23 11:18:58 compute-0 sshd-session[31274]: Accepted publickey for zuul from 192.168.122.30 port 42572 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:18:58 compute-0 systemd-logind[798]: New session 9 of user zuul.
Jan 23 11:18:58 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 23 11:18:58 compute-0 sshd-session[31274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:18:59 compute-0 python3.9[31427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:18:59 compute-0 sshd-session[31277]: Connection closed by 192.168.122.30 port 42572
Jan 23 11:18:59 compute-0 sshd-session[31274]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:18:59 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 11:18:59 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Jan 23 11:18:59 compute-0 systemd-logind[798]: Removed session 9.
Jan 23 11:19:16 compute-0 sshd-session[31454]: Accepted publickey for zuul from 192.168.122.30 port 40594 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:19:16 compute-0 systemd-logind[798]: New session 10 of user zuul.
Jan 23 11:19:16 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 23 11:19:16 compute-0 sshd-session[31454]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:19:16 compute-0 python3.9[31607]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 11:19:17 compute-0 python3.9[31781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:19:18 compute-0 sudo[31931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbkkovxqrckfkoirmdoflkfmzbzjfcdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167158.0556276-40-211622822061402/AnsiballZ_command.py'
Jan 23 11:19:18 compute-0 sudo[31931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:18 compute-0 python3.9[31933]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:19:18 compute-0 sudo[31931]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:19 compute-0 sudo[32084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqrhpsjdjjcvjeqmkkmwntzmeoefyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167158.844448-52-12015395004343/AnsiballZ_stat.py'
Jan 23 11:19:19 compute-0 sudo[32084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:19 compute-0 python3.9[32086]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:19:19 compute-0 sudo[32084]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:19 compute-0 sudo[32236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtlhxhotczkdgqbvyfmesejntcabrqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167159.5344124-60-67120842832216/AnsiballZ_file.py'
Jan 23 11:19:20 compute-0 sudo[32236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:20 compute-0 python3.9[32238]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:19:20 compute-0 sudo[32236]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:20 compute-0 sudo[32388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvegwemedumoajamqjachrmrfoqjdjqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167160.336947-68-123219528634604/AnsiballZ_stat.py'
Jan 23 11:19:20 compute-0 sudo[32388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:20 compute-0 python3.9[32390]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:19:20 compute-0 sudo[32388]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:21 compute-0 sudo[32511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unlccrritorynzjaewcagrkdhranudze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167160.336947-68-123219528634604/AnsiballZ_copy.py'
Jan 23 11:19:21 compute-0 sudo[32511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:21 compute-0 python3.9[32513]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167160.336947-68-123219528634604/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:19:21 compute-0 sudo[32511]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:21 compute-0 sudo[32663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yweaetyuctwsxubuevbedptmiqkfgeco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167161.616268-83-187598319099675/AnsiballZ_setup.py'
Jan 23 11:19:21 compute-0 sudo[32663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:22 compute-0 python3.9[32665]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:19:22 compute-0 sudo[32663]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:22 compute-0 sudo[32819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uplnvbwyhnefmmdqmqyezlqhnjbhwyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167162.5366468-91-74590565188246/AnsiballZ_file.py'
Jan 23 11:19:22 compute-0 sudo[32819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:23 compute-0 python3.9[32821]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:19:23 compute-0 sudo[32819]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:23 compute-0 sudo[32971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbcyvayqwbzdaadquxjbwsxqepfwrmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167163.2155879-100-110669775407775/AnsiballZ_file.py'
Jan 23 11:19:23 compute-0 sudo[32971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:23 compute-0 python3.9[32973]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:19:23 compute-0 sudo[32971]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:24 compute-0 python3.9[33123]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:19:28 compute-0 python3.9[33376]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
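
lineinfile with state=present against /proc/cmdline only makes sense as an assertion, since that file is read-only; presumably the play runs this task in check mode (an assumption, check mode is not visible in the journal). The same check by hand:

    grep -q 'cloud-init=disabled' /proc/cmdline && echo present || echo missing
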
Jan 23 11:19:29 compute-0 python3.9[33526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:19:30 compute-0 python3.9[33680]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:19:30 compute-0 sudo[33836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpiwbnqndzeohnaehsdxvnizqmkkorqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167170.5994327-148-2069480254468/AnsiballZ_setup.py'
Jan 23 11:19:30 compute-0 sudo[33836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:31 compute-0 python3.9[33838]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:19:31 compute-0 sudo[33836]: pam_unix(sudo:session): session closed for user root
Jan 23 11:19:31 compute-0 sudo[33920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sspixmogzhheneytnfgzlnhiymjqsdsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167170.5994327-148-2069480254468/AnsiballZ_dnf.py'
Jan 23 11:19:31 compute-0 sudo[33920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:19:31 compute-0 python3.9[33922]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
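
The dnf task above resolves to roughly the following command line (package list exactly as logged; -y assumed, since the module never prompts):

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos
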
Jan 23 11:19:38 compute-0 sshd-session[33991]: Invalid user sol from 193.32.162.146 port 44826
Jan 23 11:19:38 compute-0 sshd-session[33991]: Connection closed by invalid user sol 193.32.162.146 port 44826 [preauth]
Jan 23 11:20:30 compute-0 systemd[1]: Reloading.
Jan 23 11:20:30 compute-0 systemd-rc-local-generator[34121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:20:30 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 23 11:20:30 compute-0 systemd[1]: Reloading.
Jan 23 11:20:30 compute-0 systemd-rc-local-generator[34164]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:20:30 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 23 11:20:30 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 23 11:20:30 compute-0 systemd[1]: Reloading.
Jan 23 11:20:30 compute-0 systemd-rc-local-generator[34202]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:20:31 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 23 11:20:31 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:20:31 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:20:31 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:21:36 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 11:21:36 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 11:21:37 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 23 11:21:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:21:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:21:37 compute-0 systemd[1]: Reloading.
Jan 23 11:21:37 compute-0 systemd-rc-local-generator[34518]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:21:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:21:37 compute-0 sudo[33920]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:38 compute-0 sudo[35433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrxvbmpiqjtindizdoxoncawlgpuogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167298.0674658-160-182124296744113/AnsiballZ_command.py'
Jan 23 11:21:38 compute-0 sudo[35433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:38 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:21:38 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:21:38 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.225s CPU time.
Jan 23 11:21:38 compute-0 systemd[1]: run-rd464e5c466e34b02b05861820d19e4a0.service: Deactivated successfully.
Jan 23 11:21:38 compute-0 python3.9[35435]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:21:39 compute-0 sudo[35433]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:39 compute-0 sshd-session[35566]: Invalid user sol from 193.32.162.146 port 53196
Jan 23 11:21:39 compute-0 sshd-session[35566]: Connection closed by invalid user sol 193.32.162.146 port 53196 [preauth]
Jan 23 11:21:39 compute-0 sudo[35717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxawkcvlcblsknpslouvxgahdmtywerm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167299.3698735-168-228797135563077/AnsiballZ_selinux.py'
Jan 23 11:21:39 compute-0 sudo[35717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:40 compute-0 python3.9[35719]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 11:21:40 compute-0 sudo[35717]: pam_unix(sudo:session): session closed for user root
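
The selinux task pins the targeted policy in enforcing mode, both at runtime and in /etc/selinux/config. A hand-run equivalent (the sed expression is an assumption about the file's layout, which normally carries a single SELINUX= line):

    setenforce 1                                                    # runtime switch
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config   # persist across reboots
    getenforce                                                      # should print Enforcing
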
Jan 23 11:21:41 compute-0 sudo[35869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacffislbwnjqgyjntftsplqzfqtwoiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167301.06814-179-69853898343813/AnsiballZ_command.py'
Jan 23 11:21:41 compute-0 sudo[35869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:41 compute-0 python3.9[35871]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 11:21:43 compute-0 sudo[35869]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:44 compute-0 sudo[36023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhqcmwtcsgutcvxynlimxfumyugqnnmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167303.9388268-187-163084575781001/AnsiballZ_file.py'
Jan 23 11:21:44 compute-0 sudo[36023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:44 compute-0 python3.9[36025]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:21:44 compute-0 sudo[36023]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:44 compute-0 irqbalance[788]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 23 11:21:44 compute-0 irqbalance[788]: IRQ 26 affinity is now unmanaged
Jan 23 11:21:45 compute-0 sudo[36175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgsfthbwebkeiopfedvvjxwytjkvauxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167305.0606377-195-82498108676480/AnsiballZ_mount.py'
Jan 23 11:21:45 compute-0 sudo[36175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:45 compute-0 python3.9[36177]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 11:21:45 compute-0 sudo[36175]: pam_unix(sudo:session): session closed for user root
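
Note the mount task uses state=present, which only records the entry in /etc/fstab and does not activate it; mkswap/swapon follow later in this run. With the arguments logged (src=/swap, fstype=swap, opts=sw), the resulting fstab line is:

    /swap none swap sw 0 0
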
Jan 23 11:21:47 compute-0 sudo[36327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvffuowvugieyikpwyvyhwcrkvhjchy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167306.9334848-223-40639746906638/AnsiballZ_file.py'
Jan 23 11:21:47 compute-0 sudo[36327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:47 compute-0 python3.9[36329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:21:47 compute-0 sudo[36327]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:47 compute-0 sudo[36479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdhnrifazwwddabfahcoinbktzxxlrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167307.579016-231-186850467928074/AnsiballZ_stat.py'
Jan 23 11:21:47 compute-0 sudo[36479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:48 compute-0 python3.9[36481]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:21:48 compute-0 sudo[36479]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:48 compute-0 sudo[36602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdkgadspnoejizskdliqkivdsgrgslsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167307.579016-231-186850467928074/AnsiballZ_copy.py'
Jan 23 11:21:48 compute-0 sudo[36602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:48 compute-0 python3.9[36604]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167307.579016-231-186850467928074/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:21:48 compute-0 sudo[36602]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:49 compute-0 sudo[36754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdufllxtuzfaoxpzdjmojxgenmxmscm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167309.1477296-255-121185763660721/AnsiballZ_stat.py'
Jan 23 11:21:49 compute-0 sudo[36754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:52 compute-0 python3.9[36756]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:21:52 compute-0 sudo[36754]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:52 compute-0 sudo[36907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-couaooaxluieygxreoefltzrwxhsjlmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167312.202778-263-41200641031807/AnsiballZ_command.py'
Jan 23 11:21:52 compute-0 sudo[36907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:52 compute-0 python3.9[36909]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:21:52 compute-0 sudo[36907]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:53 compute-0 sudo[37060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebmqelffyaeeztcifnrwlhaaapymnwgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167312.8323152-271-9645382474039/AnsiballZ_file.py'
Jan 23 11:21:53 compute-0 sudo[37060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:55 compute-0 python3.9[37062]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:21:55 compute-0 sudo[37060]: pam_unix(sudo:session): session closed for user root
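
vgimportdevices --all writes every visible PV into the LVM devices file, and the follow-up touch guarantees /etc/lvm/devices/system.devices exists even when no PVs were found, which flips lvm2 from filter-based to devices-file-based scanning. Manually, under the same assumptions:

    /usr/sbin/vgimportdevices --all   # may find nothing on a guest with no PVs
    touch /etc/lvm/devices/system.devices
    chmod 0600 /etc/lvm/devices/system.devices
    lvmdevices                        # list whatever the devices file now pins
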
Jan 23 11:21:56 compute-0 sudo[37212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyqcnecfdkyntqyxqxdhqyrodzukpszv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167315.5427296-282-126075618531916/AnsiballZ_getent.py'
Jan 23 11:21:56 compute-0 sudo[37212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:56 compute-0 python3.9[37214]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 11:21:56 compute-0 sudo[37212]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:56 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:21:56 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:21:57 compute-0 sudo[37366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvajgftzyzijolnwigplqecyudzowcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167316.4908276-290-152819005563717/AnsiballZ_group.py'
Jan 23 11:21:57 compute-0 sudo[37366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:57 compute-0 python3.9[37368]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 11:21:57 compute-0 groupadd[37369]: group added to /etc/group: name=qemu, GID=107
Jan 23 11:21:57 compute-0 groupadd[37369]: group added to /etc/gshadow: name=qemu
Jan 23 11:21:57 compute-0 groupadd[37369]: new group: name=qemu, GID=107
Jan 23 11:21:57 compute-0 sudo[37366]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:58 compute-0 sudo[37524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioyekbrkoervbavfqfqnfpuxfhfidrcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167317.49439-298-161629318640313/AnsiballZ_user.py'
Jan 23 11:21:58 compute-0 sudo[37524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:58 compute-0 python3.9[37526]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 11:21:58 compute-0 useradd[37528]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 11:21:58 compute-0 sudo[37524]: pam_unix(sudo:session): session closed for user root
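
These two tasks pin the qemu group and user to GID/UID 107 before any package can claim the name with a different id, so file ownership stays consistent between host and containers. Shell equivalent (ids and shell as logged):

    getent group qemu  || groupadd -g 107 qemu
    getent passwd qemu || useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu
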
Jan 23 11:21:58 compute-0 sudo[37684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxicsszlffbweqnymopexvombizgvke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167318.5223994-306-229693968012933/AnsiballZ_getent.py'
Jan 23 11:21:58 compute-0 sudo[37684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:58 compute-0 python3.9[37686]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 11:21:59 compute-0 sudo[37684]: pam_unix(sudo:session): session closed for user root
Jan 23 11:21:59 compute-0 sudo[37837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvrwaybvjrcdkvetrlzoinnaqsbrzrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167319.1542912-314-260387093116644/AnsiballZ_group.py'
Jan 23 11:21:59 compute-0 sudo[37837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:21:59 compute-0 python3.9[37839]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 11:21:59 compute-0 groupadd[37840]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 23 11:21:59 compute-0 groupadd[37840]: group added to /etc/gshadow: name=hugetlbfs
Jan 23 11:21:59 compute-0 groupadd[37840]: new group: name=hugetlbfs, GID=42477
Jan 23 11:21:59 compute-0 sudo[37837]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:00 compute-0 sudo[37995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmbmnpkcsnbgolbwrgobpzgeorfypky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167320.018525-323-219020452395113/AnsiballZ_file.py'
Jan 23 11:22:00 compute-0 sudo[37995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:00 compute-0 python3.9[37997]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 11:22:00 compute-0 sudo[37995]: pam_unix(sudo:session): session closed for user root
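
/var/lib/vhost_sockets is created qemu-owned with SELinux type virt_cache_t, the usual EDPM arrangement for vhost-user sockets shared with a DPDK dataplane (the rationale is an assumption here). To make that label survive a relabel, the file-context mapping has to be recorded too (semanage comes from policycoreutils-python-utils, which is not visible in this log):

    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    semanage fcontext -a -t virt_cache_t '/var/lib/vhost_sockets(/.*)?'
    restorecon -Rv /var/lib/vhost_sockets
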
Jan 23 11:22:01 compute-0 sudo[38147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cilyahcwhicrywkmccaivlbqhuhccqfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167320.7940042-334-31308497061062/AnsiballZ_dnf.py'
Jan 23 11:22:01 compute-0 sudo[38147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:01 compute-0 python3.9[38149]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:22:07 compute-0 sudo[38147]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:08 compute-0 sudo[38300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlemkilegonncjplvazicqoxxqrkbpiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167327.6167746-342-100223675787786/AnsiballZ_file.py'
Jan 23 11:22:08 compute-0 sudo[38300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:08 compute-0 python3.9[38302]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:08 compute-0 sudo[38300]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:08 compute-0 sudo[38452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgeojawmqmsknippgfanjwlgcbbktvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167328.3572257-350-172465141986614/AnsiballZ_stat.py'
Jan 23 11:22:08 compute-0 sudo[38452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:08 compute-0 python3.9[38454]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:22:08 compute-0 sudo[38452]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:09 compute-0 sudo[38575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stabooolejjepvzncwpcfplaaqajmchf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167328.3572257-350-172465141986614/AnsiballZ_copy.py'
Jan 23 11:22:09 compute-0 sudo[38575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:09 compute-0 python3.9[38577]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167328.3572257-350-172465141986614/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:09 compute-0 sudo[38575]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:09 compute-0 sudo[38727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhhzdltqkxgfhrebfasczflixiuxhgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167329.451079-365-17058285213946/AnsiballZ_systemd.py'
Jan 23 11:22:10 compute-0 sudo[38727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:10 compute-0 python3.9[38729]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:22:10 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 11:22:10 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 11:22:10 compute-0 kernel: Bridge firewalling registered
Jan 23 11:22:10 compute-0 systemd-modules-load[38733]: Inserted module 'br_netfilter'
Jan 23 11:22:10 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 11:22:10 compute-0 sudo[38727]: pam_unix(sudo:session): session closed for user root
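
The kernel message at 11:22:10 is the point of this task: bridged traffic no longer traverses ip/ip6/arptables unless br_netfilter is loaded, so 99-edpm.conf pre-loads it at every boot. By hand (only br_netfilter is visible in the log; the file may list more modules):

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter        # confirm the module is in
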
Jan 23 11:22:10 compute-0 sudo[38887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeuxotezaqgvpcfflpfcbjkoqocubmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167330.5465214-373-71192079156655/AnsiballZ_stat.py'
Jan 23 11:22:10 compute-0 sudo[38887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:10 compute-0 python3.9[38889]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:22:10 compute-0 sudo[38887]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:11 compute-0 sudo[39010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-conhlrycxuiutubsixzavrtwifnpmcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167330.5465214-373-71192079156655/AnsiballZ_copy.py'
Jan 23 11:22:11 compute-0 sudo[39010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:11 compute-0 python3.9[39012]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167330.5465214-373-71192079156655/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:11 compute-0 sudo[39010]: pam_unix(sudo:session): session closed for user root
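
The sysctl drop-in's contents are not logged; it is applied further down by restarting systemd-sysctl (11:22:31). A generic way to re-apply and spot-check it, assuming it carries the usual bridge-nf keys that pair with the br_netfilter load above:

    sysctl --system                          # re-reads every drop-in, including /etc/sysctl.d/99-edpm.conf
    sysctl -a 2>/dev/null | grep bridge-nf   # e.g. net.bridge.bridge-nf-call-iptables
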
Jan 23 11:22:12 compute-0 sudo[39162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpnyyxpuilxwrjseftuzpanwtljrgpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167331.8418415-391-153526852884924/AnsiballZ_dnf.py'
Jan 23 11:22:12 compute-0 sudo[39162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:12 compute-0 python3.9[39164]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:22:15 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:22:16 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:22:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:22:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:22:16 compute-0 systemd[1]: Reloading.
Jan 23 11:22:16 compute-0 systemd-rc-local-generator[39223]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:22:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:22:17 compute-0 sudo[39162]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:18 compute-0 python3.9[40974]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:22:19 compute-0 python3.9[42012]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 11:22:19 compute-0 python3.9[43053]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:22:20 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:22:20 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:22:20 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.694s CPU time.
Jan 23 11:22:20 compute-0 systemd[1]: run-re9021e1fedb94a09adeabe050872e94d.service: Deactivated successfully.
Jan 23 11:22:20 compute-0 sudo[43332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ervvdcfjmvpnvazwdhiwzttitwosfnfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167340.3028023-430-194430537000551/AnsiballZ_command.py'
Jan 23 11:22:20 compute-0 sudo[43332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:20 compute-0 python3.9[43334]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:20 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 11:22:21 compute-0 systemd[1]: Starting Authorization Manager...
Jan 23 11:22:21 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 11:22:21 compute-0 polkitd[43551]: Started polkitd version 0.117
Jan 23 11:22:21 compute-0 polkitd[43551]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 11:22:21 compute-0 polkitd[43551]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 11:22:21 compute-0 polkitd[43551]: Finished loading, compiling and executing 2 rules
Jan 23 11:22:21 compute-0 polkitd[43551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 11:22:21 compute-0 systemd[1]: Started Authorization Manager.
Jan 23 11:22:21 compute-0 sudo[43332]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:21 compute-0 sudo[43719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilqjdgxdlkpakzkjapefvryfgzdvbufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167341.4550965-439-206784834361743/AnsiballZ_systemd.py'
Jan 23 11:22:21 compute-0 sudo[43719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:22 compute-0 python3.9[43721]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:22:22 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 11:22:22 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 11:22:22 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 11:22:22 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 11:22:22 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 11:22:22 compute-0 sudo[43719]: pam_unix(sudo:session): session closed for user root
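
Between 11:22:12 and 11:22:22 the node gets tuned plus the cpu-partitioning profiles, is switched to throughput-performance, and has the daemon enabled. Condensed (commands as logged, verification step added):

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active    # expect: Current active profile: throughput-performance
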
Jan 23 11:22:22 compute-0 python3.9[43883]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 11:22:24 compute-0 sudo[44033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvlyqoekvxrflqrojtjqgwznceelxhjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167344.6528285-496-242666306888376/AnsiballZ_systemd.py'
Jan 23 11:22:24 compute-0 sudo[44033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:25 compute-0 python3.9[44035]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:22:25 compute-0 systemd[1]: Reloading.
Jan 23 11:22:25 compute-0 systemd-rc-local-generator[44065]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:22:25 compute-0 sudo[44033]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:25 compute-0 sudo[44222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvrejbiponwwvwcjqjyjttuahpkduoat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167345.6079686-496-190135811995833/AnsiballZ_systemd.py'
Jan 23 11:22:25 compute-0 sudo[44222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:26 compute-0 python3.9[44224]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:22:26 compute-0 systemd[1]: Reloading.
Jan 23 11:22:26 compute-0 systemd-rc-local-generator[44252]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:22:26 compute-0 sudo[44222]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:26 compute-0 sudo[44412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrbjqehieyozsfktmhyujaolumcvknzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167346.6036584-512-236110998557434/AnsiballZ_command.py'
Jan 23 11:22:26 compute-0 sudo[44412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:27 compute-0 python3.9[44414]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:27 compute-0 sudo[44412]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:27 compute-0 sudo[44565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzducqitmbogtdapqlychcobahemxnlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167347.1941745-520-94465314720764/AnsiballZ_command.py'
Jan 23 11:22:27 compute-0 sudo[44565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:27 compute-0 python3.9[44567]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:27 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 23 11:22:27 compute-0 sudo[44565]: pam_unix(sudo:session): session closed for user root
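
The kernel line confirms a 1 GiB swap area went live. End to end, the swap steps scattered through this run (11:21:41 dd, 11:21:44 chmod, 11:21:45 fstab, 11:22:27 mkswap/swapon) collapse to:

    dd if=/dev/zero of=/swap count=1024 bs=1M    # skipped when /swap already exists (creates= guard)
    chmod 0600 /swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap
    swapon --show                                # verify
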
Jan 23 11:22:28 compute-0 sudo[44718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwmdqususmmiyisomerkhbswfqqcwmit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167348.082184-528-163814985779895/AnsiballZ_command.py'
Jan 23 11:22:28 compute-0 sudo[44718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:28 compute-0 python3.9[44720]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:29 compute-0 sudo[44718]: pam_unix(sudo:session): session closed for user root
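
update-ca-trust regenerates the consolidated trust stores from the anchor dropped at 11:21:48 (tls-ca-bundle.pem). A quick way to confirm the bundle was picked up:

    update-ca-trust
    # ca-bundle.crt is a symlink into /etc/pki/ca-trust/extracted; the count should grow
    grep -c 'BEGIN CERTIFICATE' /etc/pki/tls/certs/ca-bundle.crt
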
Jan 23 11:22:30 compute-0 sudo[44880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyfdeytcasnyxdzssspshvzvywcfbdbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167350.1165361-536-179458195849720/AnsiballZ_command.py'
Jan 23 11:22:30 compute-0 sudo[44880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:30 compute-0 python3.9[44882]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:30 compute-0 sudo[44880]: pam_unix(sudo:session): session closed for user root
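
Writing 2 to /sys/kernel/mm/ksm/run stops ksmd and un-merges every already-shared page, completing the KSM shutdown started by disabling ksm/ksmtuned above. One caveat: with _uses_shell=False the '>' is handed to echo as a literal argument rather than interpreted as a redirection, so as logged this write probably never reached the file (an observation about Ansible's command module, not something the journal shows). The intended operation, in a real shell:

    # run: 0 = stop merging, 1 = start ksmd, 2 = stop and un-merge everything shared
    echo 2 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_shared   # should drop to 0 after the un-merge
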
Jan 23 11:22:31 compute-0 sudo[45033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpyvnuxcgjrsjccwlunuxyqyhxewnghm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167350.843446-544-191845927582880/AnsiballZ_systemd.py'
Jan 23 11:22:31 compute-0 sudo[45033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:31 compute-0 python3.9[45035]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:22:31 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 11:22:31 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 23 11:22:31 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 23 11:22:31 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 23 11:22:31 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 11:22:31 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 23 11:22:31 compute-0 sudo[45033]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:31 compute-0 sshd-session[31457]: Connection closed by 192.168.122.30 port 40594
Jan 23 11:22:31 compute-0 sshd-session[31454]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:22:31 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Jan 23 11:22:31 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 11:22:31 compute-0 systemd[1]: session-10.scope: Consumed 2min 31.235s CPU time.
Jan 23 11:22:31 compute-0 systemd-logind[798]: Removed session 10.
Jan 23 11:22:39 compute-0 sshd-session[45065]: Accepted publickey for zuul from 192.168.122.30 port 58626 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:22:39 compute-0 systemd-logind[798]: New session 11 of user zuul.
Jan 23 11:22:39 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 23 11:22:39 compute-0 sshd-session[45065]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:22:39 compute-0 python3.9[45218]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:22:41 compute-0 python3.9[45372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:22:41 compute-0 sudo[45526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfjgdruxdnfvhvymxuxrpmnzehvhuohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167361.6118708-45-6416500604454/AnsiballZ_command.py'
Jan 23 11:22:41 compute-0 sudo[45526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:42 compute-0 python3.9[45528]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:42 compute-0 sudo[45526]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:43 compute-0 python3.9[45679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:22:43 compute-0 sudo[45833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atzgnetfakhqhulrpnbnbebgkvveepzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167363.5098484-65-27380111812529/AnsiballZ_setup.py'
Jan 23 11:22:43 compute-0 sudo[45833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:44 compute-0 python3.9[45835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:22:44 compute-0 sudo[45833]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:44 compute-0 sudo[45917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpvwjtbnrvnoskujhyrjqjrqkwzjuzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167363.5098484-65-27380111812529/AnsiballZ_dnf.py'
Jan 23 11:22:44 compute-0 sudo[45917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:44 compute-0 python3.9[45919]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:22:46 compute-0 sudo[45917]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:46 compute-0 sudo[46070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgcbfdrldrfueidfrmolefvpnupyxhjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167366.636977-77-20033587224667/AnsiballZ_setup.py'
Jan 23 11:22:46 compute-0 sudo[46070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:47 compute-0 python3.9[46072]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:22:47 compute-0 sudo[46070]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:47 compute-0 sudo[46241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubskttxdkloamjmwpqojgfnpmbvzpce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167367.5391767-88-173529613536063/AnsiballZ_file.py'
Jan 23 11:22:47 compute-0 sudo[46241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:48 compute-0 python3.9[46243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:22:48 compute-0 sudo[46241]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:48 compute-0 sudo[46393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimzanapvssconvyvjtamuzlryrkqujj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167368.3202975-96-233551752530277/AnsiballZ_command.py'
Jan 23 11:22:48 compute-0 sudo[46393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:48 compute-0 python3.9[46395]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat815430209-merged.mount: Deactivated successfully.
Jan 23 11:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3973000583-merged.mount: Deactivated successfully.
Jan 23 11:22:48 compute-0 podman[46396]: 2026-01-23 11:22:48.829132203 +0000 UTC m=+0.053707954 system refresh
Jan 23 11:22:48 compute-0 sudo[46393]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:49 compute-0 sudo[46556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynqwajdpoutyyzmqlymwaftzixaffpek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167369.0103667-104-156578016588063/AnsiballZ_stat.py'
Jan 23 11:22:49 compute-0 sudo[46556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:49 compute-0 python3.9[46558]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:22:49 compute-0 sudo[46556]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:22:50 compute-0 sudo[46679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwvyfzzgqaovhpynfnezokyxubhvyxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167369.0103667-104-156578016588063/AnsiballZ_copy.py'
Jan 23 11:22:50 compute-0 sudo[46679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:50 compute-0 python3.9[46681]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167369.0103667-104-156578016588063/.source.json follow=False _original_basename=podman_network_config.j2 checksum=eeb093c14a3690a8d6607d9f4b8205c976255bca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:22:50 compute-0 sudo[46679]: pam_unix(sudo:session): session closed for user root
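The inspect/stat/copy sequence above replaces podman's default network definition: the rendered podman_network_config.j2 template is installed as /etc/containers/networks/podman.json, which netavark reads in place of the built-in "podman" network. A rough CLI sketch of the same operation follows; the JSON body is illustrative only, since the template's rendered contents are not visible in this log (the subnet and bridge shown are podman's stock defaults, not values taken from this deployment):

    # inspect the current default network definition
    podman network inspect podman
    # netavark network definitions are plain JSON; subnet/gateway below are
    # podman's stock defaults, assumed here for illustration
    cat > /etc/containers/networks/podman.json <<'EOF'
    {
      "name": "podman",
      "driver": "bridge",
      "network_interface": "podman0",
      "subnets": [{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]
    }
    EOF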
Jan 23 11:22:50 compute-0 sudo[46831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfjwpqsxgmtfyktipsnnmnvsfgrefuzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167370.6898117-119-52420586267329/AnsiballZ_stat.py'
Jan 23 11:22:50 compute-0 sudo[46831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:51 compute-0 python3.9[46833]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:22:51 compute-0 sudo[46831]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:51 compute-0 sudo[46954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxqdxudhefuxpwzrsnklcoinjlnwgfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167370.6898117-119-52420586267329/AnsiballZ_copy.py'
Jan 23 11:22:51 compute-0 sudo[46954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:51 compute-0 python3.9[46956]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167370.6898117-119-52420586267329/.source.conf follow=False _original_basename=registries.conf.j2 checksum=74ad3fdf1c9c551f4957cab58c04bb2f8b0dc3e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:51 compute-0 sudo[46954]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:52 compute-0 sudo[47106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgbrghddnlmplkmjeyddqpwwbolfxbvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167371.8222759-135-191391786999826/AnsiballZ_ini_file.py'
Jan 23 11:22:52 compute-0 sudo[47106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:52 compute-0 python3.9[47108]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:52 compute-0 sudo[47106]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:52 compute-0 sudo[47258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrdelsfreltigrefugpblxpuidwxhrur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167372.5385687-135-133194591042351/AnsiballZ_ini_file.py'
Jan 23 11:22:52 compute-0 sudo[47258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:52 compute-0 python3.9[47260]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:53 compute-0 sudo[47258]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:53 compute-0 sudo[47410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmvltvtacjtwgomdesjjzhmilyjexsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167373.116068-135-91944723051812/AnsiballZ_ini_file.py'
Jan 23 11:22:53 compute-0 sudo[47410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:53 compute-0 python3.9[47412]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:53 compute-0 sudo[47410]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:53 compute-0 sudo[47562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgvlbojbbanwaravzrlztrgmkawebyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167373.6809812-135-219157617898591/AnsiballZ_ini_file.py'
Jan 23 11:22:53 compute-0 sudo[47562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:54 compute-0 python3.9[47564]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:22:54 compute-0 sudo[47562]: pam_unix(sudo:session): session closed for user root
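Taken together, the four community.general.ini_file tasks above converge /etc/containers/containers.conf to the following state, reconstructed directly from the logged section/option/value parameters:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"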
Jan 23 11:22:55 compute-0 python3.9[47714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:22:55 compute-0 sudo[47866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjhblujwlkiryiqrhkkmfcqdhcmyftu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167375.2984083-175-41134236267527/AnsiballZ_dnf.py'
Jan 23 11:22:55 compute-0 sudo[47866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:55 compute-0 python3.9[47868]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:22:57 compute-0 sudo[47866]: pam_unix(sudo:session): session closed for user root
Jan 23 11:22:57 compute-0 sudo[48019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbvtdshpempdywayryuebtjvohjsuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167377.3927262-183-198015027537777/AnsiballZ_dnf.py'
Jan 23 11:22:57 compute-0 sudo[48019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:22:57 compute-0 python3.9[48021]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:22:59 compute-0 sudo[48019]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:00 compute-0 sudo[48180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znpcvgyaqlveeibisuznglxczmnexsbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167380.0499609-193-189902944079344/AnsiballZ_dnf.py'
Jan 23 11:23:00 compute-0 sudo[48180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:00 compute-0 python3.9[48182]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:01 compute-0 sudo[48180]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:02 compute-0 sudo[48333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunqdvgjnrpjamlycrnkhqqoggpvetcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167382.1007116-202-89102499179562/AnsiballZ_dnf.py'
Jan 23 11:23:02 compute-0 sudo[48333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:02 compute-0 python3.9[48335]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:04 compute-0 sudo[48333]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:04 compute-0 sudo[48486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcvgojoszuyuvtgefulflgbnvyrutrik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167384.4137063-213-202409984996547/AnsiballZ_dnf.py'
Jan 23 11:23:04 compute-0 sudo[48486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:04 compute-0 python3.9[48488]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:06 compute-0 sudo[48486]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:07 compute-0 sudo[48642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czoixsmtizjdxonvleweavkgzlzelwmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167386.8949776-221-200913319760216/AnsiballZ_dnf.py'
Jan 23 11:23:07 compute-0 sudo[48642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:07 compute-0 python3.9[48644]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:10 compute-0 sudo[48642]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:11 compute-0 sudo[48812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvzgkugsjajxmfsreykuoniajiwicpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167391.0674465-230-16461443854354/AnsiballZ_dnf.py'
Jan 23 11:23:11 compute-0 sudo[48812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:11 compute-0 python3.9[48814]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:12 compute-0 sudo[48812]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:13 compute-0 sudo[48965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbeuluonalrxpatktrewndqnnqmrwlim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167393.0169551-239-267461833433576/AnsiballZ_dnf.py'
Jan 23 11:23:13 compute-0 sudo[48965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:13 compute-0 python3.9[48967]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:26 compute-0 sudo[48965]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:26 compute-0 sudo[49301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlubeiwpqsrfnbuzigkbqfaktjcdjnhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167406.4634075-248-16824907436210/AnsiballZ_dnf.py'
Jan 23 11:23:26 compute-0 sudo[49301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:26 compute-0 python3.9[49303]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:28 compute-0 sudo[49301]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:28 compute-0 sudo[49457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwhsczjbymmahbydgfzohuebijwkacu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167408.598593-258-159063661070487/AnsiballZ_dnf.py'
Jan 23 11:23:28 compute-0 sudo[49457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:29 compute-0 python3.9[49459]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:23:30 compute-0 sudo[49457]: pam_unix(sudo:session): session closed for user root
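Every ansible.legacy.dnf task in the run above passes download_only=True, so packages are only fetched into the local dnf cache; the matching state=present installs appear later in the log (e.g. the openvswitch install at 11:24:57 and the package-list install at 11:25:16). The CLI equivalent of one such prefetch:

    # fetch packages into the dnf cache without installing them
    dnf install -y --downloadonly podman buildah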
Jan 23 11:23:31 compute-0 sudo[49614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmmmincwqaysddgfkryvuabgmefhwhvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167411.2209904-269-84782475812251/AnsiballZ_file.py'
Jan 23 11:23:31 compute-0 sudo[49614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:31 compute-0 python3.9[49616]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:23:31 compute-0 sudo[49614]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:32 compute-0 sudo[49789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyyluvgzpvgfmagepweajpleieritwob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167411.8277001-277-153288500463748/AnsiballZ_stat.py'
Jan 23 11:23:32 compute-0 sudo[49789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:32 compute-0 python3.9[49791]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:23:32 compute-0 sudo[49789]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:32 compute-0 sudo[49912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhwraqlcggigjvpwwokwqubdydzwvno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167411.8277001-277-153288500463748/AnsiballZ_copy.py'
Jan 23 11:23:32 compute-0 sudo[49912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:32 compute-0 python3.9[49914]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769167411.8277001-277-153288500463748/.source.json _original_basename=.en6kozj8 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:23:32 compute-0 sudo[49912]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:33 compute-0 sudo[50064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mauiibomtittjwkmqmabczkomcvclrqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167413.0992286-295-141869806329635/AnsiballZ_podman_image.py'
Jan 23 11:23:33 compute-0 sudo[50064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:33 compute-0 python3.9[50066]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:23:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1682027883-lower\x2dmapped.mount: Deactivated successfully.
Jan 23 11:23:40 compute-0 podman[50078]: 2026-01-23 11:23:40.820685832 +0000 UTC m=+7.031110732 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 11:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:41 compute-0 sudo[50064]: pam_unix(sudo:session): session closed for user root
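Each containers.podman.podman_image task with pull=True resolves to an authenticated pull against the auth file written earlier. The CLI equivalent for the first image:

    podman pull --authfile /root/.config/containers/auth.json \
        quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified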
Jan 23 11:23:41 compute-0 sudo[50377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujeekvrmsimjysjmkntmpqyvfadbcckr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167421.3218253-306-183928048638890/AnsiballZ_podman_image.py'
Jan 23 11:23:41 compute-0 sudo[50377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:41 compute-0 python3.9[50379]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:23:47 compute-0 sshd-session[50455]: Invalid user sol from 193.32.162.146 port 33326
Jan 23 11:23:47 compute-0 sshd-session[50455]: Connection closed by invalid user sol 193.32.162.146 port 33326 [preauth]
Jan 23 11:23:54 compute-0 podman[50390]: 2026-01-23 11:23:54.580113011 +0000 UTC m=+12.785285578 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 11:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:23:54 compute-0 sudo[50377]: pam_unix(sudo:session): session closed for user root
Jan 23 11:23:55 compute-0 sudo[50688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oisydoqsbzuvazybksymwkvihdekcolb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167435.070626-316-193748131329031/AnsiballZ_podman_image.py'
Jan 23 11:23:55 compute-0 sudo[50688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:23:55 compute-0 python3.9[50690]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:24:08 compute-0 podman[50702]: 2026-01-23 11:24:08.172592347 +0000 UTC m=+12.602197301 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 11:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:08 compute-0 sudo[50688]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:08 compute-0 sudo[50956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orvpqxnfpplcwahccdgmvraabebcfdsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167448.7078075-327-255646361284358/AnsiballZ_podman_image.py'
Jan 23 11:24:08 compute-0 sudo[50956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:09 compute-0 python3.9[50958]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:26 compute-0 podman[50970]: 2026-01-23 11:24:26.214950733 +0000 UTC m=+16.977395105 image pull 673eb625b19e37533ec15e219000c7d8233802c3ffa5adfdd7e8765ce31baf5c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 23 11:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:26 compute-0 sudo[50956]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:27 compute-0 sudo[51310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suhurvfzdtakkitkskuemdfmmcztbidl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167466.812031-327-117248648717100/AnsiballZ_podman_image.py'
Jan 23 11:24:27 compute-0 sudo[51310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:27 compute-0 python3.9[51312]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:28 compute-0 podman[51324]: 2026-01-23 11:24:28.565043104 +0000 UTC m=+1.142622145 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 23 11:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:28 compute-0 sudo[51310]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:29 compute-0 sudo[51596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomxjenvjexsuzyfhujwxlmbajxxgzcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167469.160467-343-36921002166319/AnsiballZ_podman_image.py'
Jan 23 11:24:29 compute-0 sudo[51596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:29 compute-0 python3.9[51598]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:33 compute-0 podman[51611]: 2026-01-23 11:24:33.085754505 +0000 UTC m=+3.391132425 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 23 11:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:33 compute-0 sudo[51596]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:33 compute-0 sudo[51867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlctcrqjvwrrtikvzchjmkjubrmysovs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167473.3960602-343-2889540537357/AnsiballZ_podman_image.py'
Jan 23 11:24:33 compute-0 sudo[51867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:33 compute-0 python3.9[51869]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 23 11:24:42 compute-0 podman[51881]: 2026-01-23 11:24:42.742080536 +0000 UTC m=+8.845649480 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 23 11:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:24:42 compute-0 sudo[51867]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:43 compute-0 sshd-session[45068]: Connection closed by 192.168.122.30 port 58626
Jan 23 11:24:43 compute-0 sshd-session[45065]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:24:43 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 11:24:43 compute-0 systemd[1]: session-11.scope: Consumed 2min 18.408s CPU time.
Jan 23 11:24:43 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Jan 23 11:24:43 compute-0 systemd-logind[798]: Removed session 11.
Jan 23 11:24:48 compute-0 sshd-session[52130]: Accepted publickey for zuul from 192.168.122.30 port 39528 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:24:48 compute-0 systemd-logind[798]: New session 12 of user zuul.
Jan 23 11:24:48 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 23 11:24:48 compute-0 sshd-session[52130]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:24:49 compute-0 python3.9[52283]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:24:50 compute-0 sudo[52437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvqpjqcdvkmijjnheobflrifixsawxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167490.4480436-31-101868687496297/AnsiballZ_getent.py'
Jan 23 11:24:50 compute-0 sudo[52437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:50 compute-0 python3.9[52439]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 11:24:51 compute-0 sudo[52437]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:51 compute-0 sudo[52590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpxaxjwnzyhuwdywmtioustyniobzjik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167491.173862-39-10564581864912/AnsiballZ_group.py'
Jan 23 11:24:51 compute-0 sudo[52590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:51 compute-0 python3.9[52592]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 11:24:51 compute-0 groupadd[52593]: group added to /etc/group: name=openvswitch, GID=42476
Jan 23 11:24:51 compute-0 groupadd[52593]: group added to /etc/gshadow: name=openvswitch
Jan 23 11:24:51 compute-0 groupadd[52593]: new group: name=openvswitch, GID=42476
Jan 23 11:24:51 compute-0 sudo[52590]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:52 compute-0 sudo[52748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koserqiazwpfqnkpbzlzoyeycojbipql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167492.1020672-47-128578701254037/AnsiballZ_user.py'
Jan 23 11:24:52 compute-0 sudo[52748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:52 compute-0 python3.9[52750]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 11:24:52 compute-0 useradd[52752]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 11:24:52 compute-0 useradd[52752]: add 'openvswitch' to group 'hugetlbfs'
Jan 23 11:24:52 compute-0 useradd[52752]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 23 11:24:52 compute-0 sudo[52748]: pam_unix(sudo:session): session closed for user root
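The getent/group/user sequence above pins the openvswitch account to a fixed UID/GID so file ownership stays consistent across nodes. A shell sketch of the same steps, using only the values visible in the log:

    # create the group and user only if the account does not already exist
    getent passwd openvswitch || {
        groupadd -g 42476 openvswitch
        useradd -m -u 42476 -g openvswitch -G hugetlbfs \
            -s /sbin/nologin -c 'openvswitch user' openvswitch
    }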
Jan 23 11:24:53 compute-0 sudo[52908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwijtjofyivyxakyjpdxtmafvrmnjit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167493.1380486-57-115700335758452/AnsiballZ_setup.py'
Jan 23 11:24:53 compute-0 sudo[52908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:53 compute-0 python3.9[52910]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:24:53 compute-0 sudo[52908]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:54 compute-0 sudo[52992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryuuzuzzsohypahqpksuzptjrcduhxgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167493.1380486-57-115700335758452/AnsiballZ_dnf.py'
Jan 23 11:24:54 compute-0 sudo[52992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:54 compute-0 python3.9[52994]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:24:56 compute-0 sudo[52992]: pam_unix(sudo:session): session closed for user root
Jan 23 11:24:56 compute-0 sudo[53154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovwvhywuwylgvutrfnwgjnuultmwmjkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167496.6380057-71-115036683297137/AnsiballZ_dnf.py'
Jan 23 11:24:56 compute-0 sudo[53154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:24:57 compute-0 python3.9[53156]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:25:08 compute-0 kernel: SELinux:  Converting 2737 SID table entries...
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 11:25:08 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 11:25:08 compute-0 groupadd[53179]: group added to /etc/group: name=unbound, GID=994
Jan 23 11:25:08 compute-0 groupadd[53179]: group added to /etc/gshadow: name=unbound
Jan 23 11:25:08 compute-0 groupadd[53179]: new group: name=unbound, GID=994
Jan 23 11:25:08 compute-0 useradd[53186]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 23 11:25:08 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 23 11:25:08 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 23 11:25:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:25:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:25:09 compute-0 systemd[1]: Reloading.
Jan 23 11:25:09 compute-0 systemd-sysv-generator[53687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:25:09 compute-0 systemd-rc-local-generator[53684]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:25:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:25:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:25:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:25:10 compute-0 systemd[1]: run-rc6473f946da44d23a7adbd5973a6dd39.service: Deactivated successfully.
Jan 23 11:25:10 compute-0 sudo[53154]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:11 compute-0 sudo[54252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igwomomrztqngyzqiaakawedxerkptcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167510.5806677-79-225328860347276/AnsiballZ_systemd.py'
Jan 23 11:25:11 compute-0 sudo[54252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:11 compute-0 python3.9[54254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:25:11 compute-0 systemd[1]: Reloading.
Jan 23 11:25:11 compute-0 systemd-rc-local-generator[54284]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:25:11 compute-0 systemd-sysv-generator[54288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:25:11 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 23 11:25:11 compute-0 chown[54295]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 23 11:25:11 compute-0 ovs-ctl[54300]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 23 11:25:11 compute-0 ovs-ctl[54300]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 23 11:25:11 compute-0 ovs-ctl[54300]: Starting ovsdb-server [  OK  ]
Jan 23 11:25:11 compute-0 ovs-vsctl[54349]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 23 11:25:12 compute-0 ovs-vsctl[54369]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"9a136bfd-345f-428f-a7f6-d55531120214\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 23 11:25:12 compute-0 ovs-ctl[54300]: Configuring Open vSwitch system IDs [  OK  ]
Jan 23 11:25:12 compute-0 ovs-vsctl[54375]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 11:25:12 compute-0 ovs-ctl[54300]: Enabling remote OVSDB managers [  OK  ]
Jan 23 11:25:12 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 23 11:25:12 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 23 11:25:12 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 23 11:25:12 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 23 11:25:12 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 23 11:25:12 compute-0 ovs-ctl[54419]: Inserting openvswitch module [  OK  ]
Jan 23 11:25:12 compute-0 ovs-ctl[54388]: Starting ovs-vswitchd [  OK  ]
Jan 23 11:25:12 compute-0 ovs-vsctl[54437]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 11:25:12 compute-0 ovs-ctl[54388]: Enabling remote OVSDB managers [  OK  ]
Jan 23 11:25:12 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 23 11:25:12 compute-0 systemd[1]: Starting Open vSwitch...
Jan 23 11:25:12 compute-0 systemd[1]: Finished Open vSwitch.
Jan 23 11:25:12 compute-0 sudo[54252]: pam_unix(sudo:session): session closed for user root
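The ansible.builtin.systemd task enables and starts openvswitch.service, whose first start (as the ovs-ctl lines above show) creates /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, and inserts the openvswitch kernel module. The direct equivalent:

    systemctl enable --now openvswitch.service
    ovs-vsctl show    # sanity check: both daemons answer on the db socket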
Jan 23 11:25:13 compute-0 python3.9[54589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:25:13 compute-0 sudo[54739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pegnzastwaybnnfalbyhmgrqhugfxfrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167513.3833005-97-58995867218262/AnsiballZ_sefcontext.py'
Jan 23 11:25:13 compute-0 sudo[54739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:14 compute-0 python3.9[54741]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 11:25:15 compute-0 kernel: SELinux:  Converting 2751 SID table entries...
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 11:25:15 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 11:25:15 compute-0 sudo[54739]: pam_unix(sudo:session): session closed for user root
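The community.general.sefcontext task adds a persistent SELinux file-context rule for /var/lib/edpm-config; the kernel "Converting ... SID table entries" lines above are the policy reload it triggers. The semanage equivalent, plus the relabel step one would typically run afterwards (the module itself only records the rule):

    semanage fcontext -a -t container_file_t -r s0 '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config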
Jan 23 11:25:16 compute-0 python3.9[54896]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:25:16 compute-0 sudo[55052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izhclrpqjubrhesaaxhiizdfuezkppee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167516.4867837-115-4826690764508/AnsiballZ_dnf.py'
Jan 23 11:25:16 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 23 11:25:16 compute-0 sudo[55052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:16 compute-0 python3.9[55054]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:25:18 compute-0 sudo[55052]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:18 compute-0 sudo[55205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqxajsukkgguzcdihprdczdhfjwpads ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167518.3633738-123-7124354209660/AnsiballZ_command.py'
Jan 23 11:25:18 compute-0 sudo[55205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:18 compute-0 python3.9[55207]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:25:19 compute-0 sudo[55205]: pam_unix(sudo:session): session closed for user root
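rpm -V compares each installed file against the metadata recorded in the RPM database and prints nothing when everything matches; any deviation appears as a flag string (S size, M mode, 5 digest, and so on) and yields a nonzero exit status:

    rpm -V nftables || echo "nftables payload deviates from the RPM database"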
Jan 23 11:25:20 compute-0 sudo[55492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoerxbpozezzmcqlqbrnecxdkybdbwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167519.7647834-131-82354901379388/AnsiballZ_file.py'
Jan 23 11:25:20 compute-0 sudo[55492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:20 compute-0 python3.9[55494]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 11:25:20 compute-0 sudo[55492]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:21 compute-0 python3.9[55644]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:25:21 compute-0 sudo[55796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljfrreyedfdogbjyatxhsejaxraylqxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167521.2423022-147-57665996402211/AnsiballZ_dnf.py'
Jan 23 11:25:21 compute-0 sudo[55796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:21 compute-0 python3.9[55798]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:25:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:25:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:25:23 compute-0 systemd[1]: Reloading.
Jan 23 11:25:24 compute-0 systemd-rc-local-generator[55836]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:25:24 compute-0 systemd-sysv-generator[55839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:25:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:25:24 compute-0 sudo[55796]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:25 compute-0 sudo[56112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxlajhinqyqictxvesqoflqaowjvlwql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167525.1427114-155-265452240669662/AnsiballZ_systemd.py'
Jan 23 11:25:25 compute-0 sudo[56112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:25:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:25:25 compute-0 systemd[1]: run-rfd6d138eba534ceba43fc9b243bca04e.service: Deactivated successfully.
Jan 23 11:25:25 compute-0 python3.9[56114]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:25:25 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 11:25:25 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 23 11:25:25 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 23 11:25:25 compute-0 systemd[1]: Stopping Network Manager...
Jan 23 11:25:25 compute-0 NetworkManager[7190]: <info>  [1769167525.7328] caught SIGTERM, shutting down normally.
Jan 23 11:25:25 compute-0 NetworkManager[7190]: <info>  [1769167525.7339] dhcp4 (eth0): canceled DHCP transaction
Jan 23 11:25:25 compute-0 NetworkManager[7190]: <info>  [1769167525.7340] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 11:25:25 compute-0 NetworkManager[7190]: <info>  [1769167525.7340] dhcp4 (eth0): state changed no lease
Jan 23 11:25:25 compute-0 NetworkManager[7190]: <info>  [1769167525.7342] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 11:25:25 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 11:25:25 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 11:25:25 compute-0 NetworkManager[7190]: <info>  [1769167525.7767] exiting (success)
Jan 23 11:25:25 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 11:25:25 compute-0 systemd[1]: Stopped Network Manager.
Jan 23 11:25:25 compute-0 systemd[1]: NetworkManager.service: Consumed 13.568s CPU time, 4.3M memory peak, read 0B from disk, written 28.5K to disk.
Jan 23 11:25:25 compute-0 systemd[1]: Starting Network Manager...
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.8509] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:99274698-eb02-4e43-8d1b-7c4762b80d7f)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.8511] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.8564] manager[0x556fc9165000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 11:25:25 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 11:25:25 compute-0 systemd[1]: Started Hostname Service.
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9317] hostname: hostname: using hostnamed
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9318] hostname: static hostname changed from (none) to "compute-0"
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9322] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9326] manager[0x556fc9165000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9326] manager[0x556fc9165000]: rfkill: WWAN hardware radio set enabled
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9345] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9352] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9353] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9353] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9354] manager: Networking is enabled by state file
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9355] settings: Loaded settings plugin: keyfile (internal)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9358] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9384] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9392] dhcp: init: Using DHCP client 'internal'
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9395] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9399] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9405] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9414] device (lo): Activation: starting connection 'lo' (db09884b-81ca-48ab-b981-7d9244c8c055)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9419] device (eth0): carrier: link connected
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9423] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9427] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9427] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9432] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9436] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9440] device (eth1): carrier: link connected
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9444] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9447] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b5bacad9-2802-5eea-a61e-31e1674ecfc2) (indicated)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9447] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9450] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9455] device (eth1): Activation: starting connection 'ci-private-network' (b5bacad9-2802-5eea-a61e-31e1674ecfc2)
Jan 23 11:25:25 compute-0 systemd[1]: Started Network Manager.
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9463] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9468] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9470] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9471] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9472] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9475] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9476] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9477] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9479] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9484] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9486] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9492] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9501] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9511] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9514] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9527] device (lo): Activation: successful, device activated.
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9535] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Jan 23 11:25:25 compute-0 NetworkManager[56133]: <info>  [1769167525.9542] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 11:25:25 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 23 11:25:25 compute-0 sudo[56112]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0336] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0349] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0357] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0360] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0361] device (eth1): Activation: successful, device activated.
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0414] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0417] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0422] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0428] device (eth0): Activation: successful, device activated.
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0434] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 11:25:26 compute-0 NetworkManager[56133]: <info>  [1769167526.0439] manager: startup complete
Jan 23 11:25:26 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 23 11:25:26 compute-0 sudo[56340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrdxcnfvtalmzvkdulyizvssaohbxdzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167526.1592722-163-216232087522033/AnsiballZ_dnf.py'
Jan 23 11:25:26 compute-0 sudo[56340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:26 compute-0 python3.9[56342]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:25:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:25:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:25:33 compute-0 systemd[1]: Reloading.
Jan 23 11:25:33 compute-0 systemd-rc-local-generator[56397]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:25:33 compute-0 systemd-sysv-generator[56400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:25:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:25:35 compute-0 sudo[56340]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:25:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:25:35 compute-0 systemd[1]: run-r7fd3f402ca51438ab857ffa19828c2f2.service: Deactivated successfully.
Jan 23 11:25:36 compute-0 sudo[56805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokeadmyafvmujptyhpormcbulfnoqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167535.7382152-175-187439823375718/AnsiballZ_stat.py'
Jan 23 11:25:36 compute-0 sudo[56805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:36 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 11:25:36 compute-0 python3.9[56807]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:25:36 compute-0 sudo[56805]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:36 compute-0 sudo[56957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmrynfdsaptyybnwvbgyrbpplbomwlvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167536.358357-184-179509664856516/AnsiballZ_ini_file.py'
Jan 23 11:25:36 compute-0 sudo[56957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:36 compute-0 python3.9[56959]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:36 compute-0 sudo[56957]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:38 compute-0 sudo[57111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fracafvhnynhcxixdikrnwxmpypkipqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167537.191065-194-166690723341107/AnsiballZ_ini_file.py'
Jan 23 11:25:38 compute-0 sudo[57111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:38 compute-0 python3.9[57113]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:38 compute-0 sudo[57111]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:38 compute-0 sudo[57263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhmczimyopfegnykfahcykrjmwoozix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167538.5278802-194-173820527739443/AnsiballZ_ini_file.py'
Jan 23 11:25:38 compute-0 sudo[57263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:38 compute-0 python3.9[57265]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:38 compute-0 sudo[57263]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:39 compute-0 sudo[57415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxbbjoxikmktimoissbqeqmqrjrdczkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167539.1287944-209-250531269222984/AnsiballZ_ini_file.py'
Jan 23 11:25:39 compute-0 sudo[57415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:39 compute-0 python3.9[57417]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:39 compute-0 sudo[57415]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:39 compute-0 sudo[57567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtxyxnisqapfichbutoovitcduasqkww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167539.7019742-209-119608454629184/AnsiballZ_ini_file.py'
Jan 23 11:25:39 compute-0 sudo[57567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:40 compute-0 python3.9[57569]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:40 compute-0 sudo[57567]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:40 compute-0 sudo[57719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyqebzbacyolopvjbzypqbgyttajxoml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167540.4157991-224-202471455066876/AnsiballZ_stat.py'
Jan 23 11:25:40 compute-0 sudo[57719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:40 compute-0 python3.9[57721]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:25:40 compute-0 sudo[57719]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:41 compute-0 sudo[57842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stiapflirpunzxjaurlzvjgqnycxenhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167540.4157991-224-202471455066876/AnsiballZ_copy.py'
Jan 23 11:25:41 compute-0 sudo[57842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:41 compute-0 python3.9[57844]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167540.4157991-224-202471455066876/.source _original_basename=.g6pd6apg follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:41 compute-0 sudo[57842]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:42 compute-0 sudo[57994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqdxybvgizexpxqxmosdinrlsslivmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167541.8426807-239-112116201013632/AnsiballZ_file.py'
Jan 23 11:25:42 compute-0 sudo[57994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:42 compute-0 python3.9[57996]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:42 compute-0 sudo[57994]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:42 compute-0 sudo[58146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtdjshlnpklagoppysmypxmrrvwnsfyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167542.4703588-247-141323721024854/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 23 11:25:42 compute-0 sudo[58146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:43 compute-0 python3.9[58148]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 23 11:25:43 compute-0 sudo[58146]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:43 compute-0 sudo[58298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfknmgenffusqutwugbcewektzjfywjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167543.3412387-256-164621374534708/AnsiballZ_file.py'
Jan 23 11:25:43 compute-0 sudo[58298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:43 compute-0 python3.9[58300]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:43 compute-0 sudo[58298]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:44 compute-0 sudo[58450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osquewrpocbtxngrgnuemrqhhhveuaia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167544.18768-266-140499347139248/AnsiballZ_stat.py'
Jan 23 11:25:44 compute-0 sudo[58450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:44 compute-0 sudo[58450]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:44 compute-0 sudo[58573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxschvkxextsposjaawvlhdiylfvpzma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167544.18768-266-140499347139248/AnsiballZ_copy.py'
Jan 23 11:25:44 compute-0 sudo[58573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:45 compute-0 sudo[58573]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:45 compute-0 sudo[58725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmkqraldrmklwjbxyiimfjxzidduvcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167545.34611-281-18196202002962/AnsiballZ_slurp.py'
Jan 23 11:25:45 compute-0 sudo[58725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:45 compute-0 python3.9[58727]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 23 11:25:46 compute-0 sudo[58725]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:46 compute-0 sudo[58900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxpvaumtunrailruyyxjdzaxsptfygy ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167546.2062578-290-235622460268176/async_wrapper.py j85645768689 300 /home/zuul/.ansible/tmp/ansible-tmp-1769167546.2062578-290-235622460268176/AnsiballZ_edpm_os_net_config.py _'
Jan 23 11:25:46 compute-0 sudo[58900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:46 compute-0 ansible-async_wrapper.py[58902]: Invoked with j85645768689 300 /home/zuul/.ansible/tmp/ansible-tmp-1769167546.2062578-290-235622460268176/AnsiballZ_edpm_os_net_config.py _
Jan 23 11:25:46 compute-0 ansible-async_wrapper.py[58905]: Starting module and watcher
Jan 23 11:25:46 compute-0 ansible-async_wrapper.py[58905]: Start watching 58906 (300)
Jan 23 11:25:46 compute-0 ansible-async_wrapper.py[58906]: Start module (58906)
Jan 23 11:25:46 compute-0 ansible-async_wrapper.py[58902]: Return async_wrapper task started.
Jan 23 11:25:47 compute-0 sudo[58900]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:47 compute-0 python3.9[58907]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 23 11:25:47 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 23 11:25:47 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 23 11:25:47 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 23 11:25:47 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 23 11:25:47 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.7748] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.7761] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8299] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8301] audit: op="connection-add" uuid="1307691d-2d32-49c5-96ec-10253ed0473c" name="br-ex-br" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8323] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8325] audit: op="connection-add" uuid="be18ced2-c4d1-4dae-aeba-7f37050ac598" name="br-ex-port" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8338] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8340] audit: op="connection-add" uuid="3e3f64d1-fdb6-42f5-bfb7-171a779ec993" name="eth1-port" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8354] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8356] audit: op="connection-add" uuid="049f83c6-1562-4cc0-9c83-a4bf3987ee5f" name="vlan20-port" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8369] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8371] audit: op="connection-add" uuid="024013f6-71d7-49e1-b466-04976e060632" name="vlan21-port" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8384] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8386] audit: op="connection-add" uuid="ab521014-bf46-4386-bd9b-712872f33475" name="vlan22-port" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8410] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8431] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8433] audit: op="connection-add" uuid="ff2bba80-f7ee-471e-9122-b67ec78cb1c1" name="br-ex-if" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8572] audit: op="connection-update" uuid="b5bacad9-2802-5eea-a61e-31e1674ecfc2" name="ci-private-network" args="ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv6.routing-rules,ipv6.routes,ipv4.dns,ipv4.method,ipv4.routes,ipv4.addresses,ipv4.never-default,ipv4.routing-rules,connection.controller,connection.slave-type,connection.master,connection.timestamp,connection.port-type,ovs-external-ids.data,ovs-interface.type" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8597] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8599] audit: op="connection-add" uuid="c5097033-82d9-45b2-a05d-877298f938b8" name="vlan20-if" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8620] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8622] audit: op="connection-add" uuid="68062aee-7171-4939-bf89-6fbf6f78ac21" name="vlan21-if" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8641] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8644] audit: op="connection-add" uuid="df06288f-0cbf-4012-b957-5f56a391fd13" name="vlan22-if" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8657] audit: op="connection-delete" uuid="15b44493-7c24-36e8-977a-5ba5b78aa3d2" name="Wired connection 1" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8671] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8675] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8682] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8687] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1307691d-2d32-49c5-96ec-10253ed0473c)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8688] audit: op="connection-activate" uuid="1307691d-2d32-49c5-96ec-10253ed0473c" name="br-ex-br" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8691] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8692] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8698] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8703] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (be18ced2-c4d1-4dae-aeba-7f37050ac598)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8705] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8707] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8711] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8716] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (3e3f64d1-fdb6-42f5-bfb7-171a779ec993)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8718] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8719] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8726] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8730] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (049f83c6-1562-4cc0-9c83-a4bf3987ee5f)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8733] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8734] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8741] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8747] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (024013f6-71d7-49e1-b466-04976e060632)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8749] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8750] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8755] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8760] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ab521014-bf46-4386-bd9b-712872f33475)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8761] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8764] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8766] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8772] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8774] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8777] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8782] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ff2bba80-f7ee-471e-9122-b67ec78cb1c1)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8783] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8788] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8790] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8792] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8793] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8806] device (eth1): disconnecting for new activation request.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8814] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8817] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8819] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8821] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8824] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8826] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8829] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8835] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c5097033-82d9-45b2-a05d-877298f938b8)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8835] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8839] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8841] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8844] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8849] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8850] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8854] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8859] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (68062aee-7171-4939-bf89-6fbf6f78ac21)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8860] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8865] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8867] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8869] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8873] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <warn>  [1769167548.8875] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8879] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8885] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (df06288f-0cbf-4012-b957-5f56a391fd13)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8886] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8890] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8893] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8894] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8897] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8912] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8914] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8919] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8921] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8930] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8935] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8939] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8955] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8957] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 systemd-udevd[58912]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:25:48 compute-0 kernel: Timeout policy base is empty
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8967] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8972] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8975] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8977] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8982] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8986] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8990] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8992] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.8998] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9004] dhcp4 (eth0): canceled DHCP transaction
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9004] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9004] dhcp4 (eth0): state changed no lease
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9006] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9017] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9021] audit: op="device-reapply" interface="eth1" ifindex=3 pid=58908 uid=0 result="fail" reason="Device is not activated"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9029] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 23 11:25:48 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9106] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9110] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9118] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 23 11:25:48 compute-0 kernel: br-ex: entered promiscuous mode
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9339] device (eth1): disconnecting for new activation request.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9341] audit: op="connection-activate" uuid="b5bacad9-2802-5eea-a61e-31e1674ecfc2" name="ci-private-network" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9342] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 23 11:25:48 compute-0 kernel: vlan22: entered promiscuous mode
Jan 23 11:25:48 compute-0 systemd-udevd[58913]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:25:48 compute-0 kernel: vlan21: entered promiscuous mode
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9429] device (eth1): Activation: starting connection 'ci-private-network' (b5bacad9-2802-5eea-a61e-31e1674ecfc2)
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9453] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9459] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9460] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9464] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9472] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9473] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9475] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9476] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9478] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9479] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 kernel: vlan20: entered promiscuous mode
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9482] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58908 uid=0 result="success"
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9489] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9495] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9499] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9504] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9509] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9512] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9515] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9519] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9523] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9527] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9531] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9535] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9538] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9552] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9559] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9566] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9577] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9582] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9595] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9597] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9604] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9608] device (eth1): Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9619] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9620] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9621] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9624] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9628] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9632] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9636] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9640] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9647] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9651] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 11:25:48 compute-0 NetworkManager[56133]: <info>  [1769167548.9667] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 11:25:49 compute-0 NetworkManager[56133]: <info>  [1769167549.0319] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 11:25:49 compute-0 NetworkManager[56133]: <info>  [1769167549.0321] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 11:25:49 compute-0 NetworkManager[56133]: <info>  [1769167549.0327] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.1625] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.2822] checkpoint[0x556fc9139950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.2826] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.5312] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.5323] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 sudo[59240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkihuzjynfbdqeuyblmnynpbsefxyab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167550.1370523-290-168625630009957/AnsiballZ_async_status.py'
Jan 23 11:25:50 compute-0 sudo[59240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.7058] audit: op="networking-control" arg="global-dns-configuration" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.7098] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.7136] audit: op="networking-control" arg="global-dns-configuration" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.7158] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 python3.9[59242]: ansible-ansible.legacy.async_status Invoked with jid=j85645768689.58902 mode=status _async_dir=/root/.ansible_async
Jan 23 11:25:50 compute-0 sudo[59240]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.8309] checkpoint[0x556fc9139a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 23 11:25:50 compute-0 NetworkManager[56133]: <info>  [1769167550.8312] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58908 uid=0 result="success"
Jan 23 11:25:50 compute-0 ansible-async_wrapper.py[58906]: Module complete (58906)
Jan 23 11:25:51 compute-0 ansible-async_wrapper.py[58905]: Done in kid B.
Jan 23 11:25:54 compute-0 sudo[59344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcfuqnasfbwjriizxlxpyrkgwbbikrhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167550.1370523-290-168625630009957/AnsiballZ_async_status.py'
Jan 23 11:25:54 compute-0 sudo[59344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:54 compute-0 python3.9[59346]: ansible-ansible.legacy.async_status Invoked with jid=j85645768689.58902 mode=status _async_dir=/root/.ansible_async
Jan 23 11:25:54 compute-0 sudo[59344]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:54 compute-0 sudo[59444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oltiikvdddpqrbtgnquplrbviwjuhzsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167550.1370523-290-168625630009957/AnsiballZ_async_status.py'
Jan 23 11:25:54 compute-0 sudo[59444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:54 compute-0 python3.9[59446]: ansible-ansible.legacy.async_status Invoked with jid=j85645768689.58902 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 11:25:54 compute-0 sudo[59444]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:55 compute-0 sudo[59596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwjtzvoysaymgkntyreuphnfehduwwhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167554.9122088-317-232013327572169/AnsiballZ_stat.py'
Jan 23 11:25:55 compute-0 sudo[59596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:55 compute-0 python3.9[59598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:25:55 compute-0 sudo[59596]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:55 compute-0 sudo[59719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqccghrintrvjrttvchqcgzeyylpwrpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167554.9122088-317-232013327572169/AnsiballZ_copy.py'
Jan 23 11:25:55 compute-0 sudo[59719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:55 compute-0 python3.9[59721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167554.9122088-317-232013327572169/.source.returncode _original_basename=.vmq6cpj_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:55 compute-0 sudo[59719]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:55 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 11:25:56 compute-0 sudo[59874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxtllecywqalyaemzovjdgitheomyle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167556.0486064-333-86157947305235/AnsiballZ_stat.py'
Jan 23 11:25:56 compute-0 sudo[59874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:56 compute-0 python3.9[59876]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:25:56 compute-0 sudo[59874]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:56 compute-0 sudo[59997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxxcitkfohtquzmbvrhtmzoxqaihhxip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167556.0486064-333-86157947305235/AnsiballZ_copy.py'
Jan 23 11:25:56 compute-0 sudo[59997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:56 compute-0 python3.9[59999]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167556.0486064-333-86157947305235/.source.cfg _original_basename=.ezk9u_o8 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:25:56 compute-0 sudo[59997]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:57 compute-0 sudo[60149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezakxvzxjbuetjxfdnswyghjizvbmwoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167557.1115038-348-165347180141991/AnsiballZ_systemd.py'
Jan 23 11:25:57 compute-0 sudo[60149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:25:57 compute-0 python3.9[60151]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:25:57 compute-0 systemd[1]: Reloading Network Manager...
Jan 23 11:25:57 compute-0 NetworkManager[56133]: <info>  [1769167557.7017] audit: op="reload" arg="0" pid=60156 uid=0 result="success"
Jan 23 11:25:57 compute-0 NetworkManager[56133]: <info>  [1769167557.7025] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 23 11:25:57 compute-0 systemd[1]: Reloaded Network Manager.
Jan 23 11:25:57 compute-0 sudo[60149]: pam_unix(sudo:session): session closed for user root
Jan 23 11:25:58 compute-0 sshd-session[52133]: Connection closed by 192.168.122.30 port 39528
Jan 23 11:25:58 compute-0 sshd-session[52130]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:25:58 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 11:25:58 compute-0 systemd[1]: session-12.scope: Consumed 46.232s CPU time.
Jan 23 11:25:58 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Jan 23 11:25:58 compute-0 systemd-logind[798]: Removed session 12.
Jan 23 11:26:00 compute-0 sshd-session[60185]: Invalid user solana from 193.32.162.146 port 41674
Jan 23 11:26:00 compute-0 sshd-session[60185]: Connection closed by invalid user solana 193.32.162.146 port 41674 [preauth]
Jan 23 11:26:03 compute-0 sshd-session[60188]: Accepted publickey for zuul from 192.168.122.30 port 41818 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:26:03 compute-0 systemd-logind[798]: New session 13 of user zuul.
Jan 23 11:26:03 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 23 11:26:03 compute-0 sshd-session[60188]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:26:04 compute-0 python3.9[60342]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:26:05 compute-0 python3.9[60496]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:26:06 compute-0 python3.9[60685]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:26:06 compute-0 sshd-session[60191]: Connection closed by 192.168.122.30 port 41818
Jan 23 11:26:06 compute-0 sshd-session[60188]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:26:06 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 11:26:06 compute-0 systemd[1]: session-13.scope: Consumed 2.039s CPU time.
Jan 23 11:26:06 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Jan 23 11:26:06 compute-0 systemd-logind[798]: Removed session 13.
Jan 23 11:26:07 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 11:26:12 compute-0 sshd-session[60714]: Accepted publickey for zuul from 192.168.122.30 port 41820 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:26:12 compute-0 systemd-logind[798]: New session 14 of user zuul.
Jan 23 11:26:12 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 23 11:26:12 compute-0 sshd-session[60714]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:26:13 compute-0 python3.9[60867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:26:14 compute-0 python3.9[61022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:26:14 compute-0 sudo[61176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufttbeoadpjzxhzvqtbesyqsukoxleu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167574.5049686-35-34570453236640/AnsiballZ_setup.py'
Jan 23 11:26:14 compute-0 sudo[61176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:15 compute-0 python3.9[61178]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:26:15 compute-0 sudo[61176]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:15 compute-0 sudo[61260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzaabdljtoflkgdrtjujtncfwrskqsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167574.5049686-35-34570453236640/AnsiballZ_dnf.py'
Jan 23 11:26:15 compute-0 sudo[61260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:15 compute-0 python3.9[61262]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:26:17 compute-0 sudo[61260]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:17 compute-0 sudo[61414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxxktcchceqnxjlnfnuvfmcghllhbhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167577.2220724-47-185419328439520/AnsiballZ_setup.py'
Jan 23 11:26:17 compute-0 sudo[61414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:17 compute-0 python3.9[61416]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:26:17 compute-0 sudo[61414]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:18 compute-0 sudo[61605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fozgwegtljotuavenhctenroygxdxcob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167578.1677935-58-182871891759129/AnsiballZ_file.py'
Jan 23 11:26:18 compute-0 sudo[61605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:18 compute-0 python3.9[61607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:18 compute-0 sudo[61605]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:19 compute-0 sudo[61757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfwijekfmafhtabbyioyaxyrhvcuafl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167578.9057403-66-233914769415418/AnsiballZ_command.py'
Jan 23 11:26:19 compute-0 sudo[61757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:19 compute-0 python3.9[61759]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:26:19 compute-0 sudo[61757]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:20 compute-0 sudo[61921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xakiqyseoyfdvlcufoglxpcqvgdanswi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167579.6567798-74-18898994424110/AnsiballZ_stat.py'
Jan 23 11:26:20 compute-0 sudo[61921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:20 compute-0 python3.9[61923]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:20 compute-0 sudo[61921]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:20 compute-0 sudo[61999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsweuyksaneuvntduuflsxtllqihbyuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167579.6567798-74-18898994424110/AnsiballZ_file.py'
Jan 23 11:26:20 compute-0 sudo[61999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:20 compute-0 python3.9[62001]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:20 compute-0 sudo[61999]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:21 compute-0 sudo[62151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msankpjxijjesbryakpuuxdapmqjgaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167580.8047786-86-157465525706406/AnsiballZ_stat.py'
Jan 23 11:26:21 compute-0 sudo[62151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:21 compute-0 python3.9[62153]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:21 compute-0 sudo[62151]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:21 compute-0 sudo[62229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwdzqdpkhotvmggbcclostyhamdwfzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167580.8047786-86-157465525706406/AnsiballZ_file.py'
Jan 23 11:26:21 compute-0 sudo[62229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:21 compute-0 python3.9[62231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:21 compute-0 sudo[62229]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:22 compute-0 sudo[62381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gedxtmpnvyxgtviqvpfilpxvoblnsuvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167581.826589-99-209673594499086/AnsiballZ_ini_file.py'
Jan 23 11:26:22 compute-0 sudo[62381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:22 compute-0 python3.9[62383]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:22 compute-0 sudo[62381]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:22 compute-0 sudo[62533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meehsxhukkidxzagtszylzzewsfhydsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167582.5604756-99-241873345758951/AnsiballZ_ini_file.py'
Jan 23 11:26:22 compute-0 sudo[62533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:22 compute-0 python3.9[62535]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:22 compute-0 sudo[62533]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:23 compute-0 sudo[62685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmalpzjecwknahjiymifmzyzkqohagsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167583.10637-99-109997547355706/AnsiballZ_ini_file.py'
Jan 23 11:26:23 compute-0 sudo[62685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:23 compute-0 python3.9[62687]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:23 compute-0 sudo[62685]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:23 compute-0 sudo[62837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgozgyomfotnyfsdgjlwkuleypvrlxrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167583.6692886-99-192916561192653/AnsiballZ_ini_file.py'
Jan 23 11:26:23 compute-0 sudo[62837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:24 compute-0 python3.9[62839]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:24 compute-0 sudo[62837]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:24 compute-0 sudo[62989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwspustrohmxcjryerwsysudwsfgrrza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167584.3240232-130-52020366876530/AnsiballZ_dnf.py'
Jan 23 11:26:24 compute-0 sudo[62989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:24 compute-0 python3.9[62991]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:26:26 compute-0 sudo[62989]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:26 compute-0 sudo[63142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsnledbpsfymhsdoduqjdzwzkrqunlld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167586.3792126-141-77154748370099/AnsiballZ_setup.py'
Jan 23 11:26:26 compute-0 sudo[63142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:26 compute-0 python3.9[63144]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:26:26 compute-0 sudo[63142]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:27 compute-0 sudo[63296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrkpspdsasikbjwxytofufftkkizpfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167587.067868-149-264806154053160/AnsiballZ_stat.py'
Jan 23 11:26:27 compute-0 sudo[63296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:27 compute-0 python3.9[63298]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:26:27 compute-0 sudo[63296]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:27 compute-0 sudo[63448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkfedtilsyyovemleileooifgbgdvsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167587.6812263-158-147006451993065/AnsiballZ_stat.py'
Jan 23 11:26:27 compute-0 sudo[63448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:28 compute-0 python3.9[63450]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:26:28 compute-0 sudo[63448]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:28 compute-0 sudo[63600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcrmfondbgbwgiymuxpjnorfvmubffth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167588.2898686-168-257517061977356/AnsiballZ_command.py'
Jan 23 11:26:28 compute-0 sudo[63600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:28 compute-0 python3.9[63602]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:26:28 compute-0 sudo[63600]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:29 compute-0 sudo[63753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybsjwzjomtmseexdvxbyyyxapssjgtpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167588.9873877-178-7573981862174/AnsiballZ_service_facts.py'
Jan 23 11:26:29 compute-0 sudo[63753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:29 compute-0 python3.9[63755]: ansible-service_facts Invoked
Jan 23 11:26:29 compute-0 network[63772]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:26:29 compute-0 network[63773]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:26:29 compute-0 network[63774]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:26:32 compute-0 sudo[63753]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:33 compute-0 sudo[64057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwtxjjwimoyokpldljqtvgfgtbtiuzry ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769167593.5636292-193-134009062177684/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769167593.5636292-193-134009062177684/args'
Jan 23 11:26:33 compute-0 sudo[64057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:33 compute-0 sudo[64057]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:34 compute-0 sudo[64224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spzsksnoyfwyekbydsvxhiabfifepieu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167594.1755607-204-58358809367826/AnsiballZ_dnf.py'
Jan 23 11:26:34 compute-0 sudo[64224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:34 compute-0 python3.9[64226]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:26:35 compute-0 sudo[64224]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:36 compute-0 sudo[64377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eophuucqoyqdmsxcoraryfggjtsjchlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167596.1519845-217-227618236795989/AnsiballZ_package_facts.py'
Jan 23 11:26:36 compute-0 sudo[64377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:37 compute-0 python3.9[64379]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 11:26:37 compute-0 sudo[64377]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:37 compute-0 sudo[64529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdkcnuupsheuvmihyjlcfypkdvmlkuls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167597.7014756-227-55171178592473/AnsiballZ_stat.py'
Jan 23 11:26:37 compute-0 sudo[64529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:38 compute-0 python3.9[64531]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:38 compute-0 sudo[64529]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:38 compute-0 sudo[64654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtffxezrpilcdsrxbrnyoymwsxiplwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167597.7014756-227-55171178592473/AnsiballZ_copy.py'
Jan 23 11:26:38 compute-0 sudo[64654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:38 compute-0 python3.9[64656]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167597.7014756-227-55171178592473/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:38 compute-0 sudo[64654]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:39 compute-0 sudo[64808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypeeuuclcjubyffybhnykitgxdemdhrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167599.1529918-242-53408689585358/AnsiballZ_stat.py'
Jan 23 11:26:39 compute-0 sudo[64808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:39 compute-0 python3.9[64810]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:39 compute-0 sudo[64808]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:39 compute-0 sudo[64933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxghsxmcmuhnmggywpdadjmsjgapfsgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167599.1529918-242-53408689585358/AnsiballZ_copy.py'
Jan 23 11:26:39 compute-0 sudo[64933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:40 compute-0 python3.9[64935]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167599.1529918-242-53408689585358/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:40 compute-0 sudo[64933]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:41 compute-0 sudo[65087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kitunhgktwepbkbwutcbvviociamrwgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167600.661604-263-190858431156491/AnsiballZ_lineinfile.py'
Jan 23 11:26:41 compute-0 sudo[65087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:41 compute-0 python3.9[65089]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:41 compute-0 sudo[65087]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:42 compute-0 sudo[65241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofuydriselngmgmrvixltesffyqoaziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167601.8049495-278-248712941405018/AnsiballZ_setup.py'
Jan 23 11:26:42 compute-0 sudo[65241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:42 compute-0 python3.9[65243]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:26:42 compute-0 sudo[65241]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:43 compute-0 sudo[65325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkxxsrhutskszgqddeewohownztnmszy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167601.8049495-278-248712941405018/AnsiballZ_systemd.py'
Jan 23 11:26:43 compute-0 sudo[65325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:43 compute-0 python3.9[65327]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:26:43 compute-0 sudo[65325]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:44 compute-0 sudo[65479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyurmfmxnkozwxtftbpuqaeqowcounjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167603.9448266-294-54832446179009/AnsiballZ_setup.py'
Jan 23 11:26:44 compute-0 sudo[65479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:44 compute-0 python3.9[65481]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:26:44 compute-0 sudo[65479]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:44 compute-0 sudo[65563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzijgwoohbdmjmxdtrkthefcqdcgiop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167603.9448266-294-54832446179009/AnsiballZ_systemd.py'
Jan 23 11:26:44 compute-0 sudo[65563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:45 compute-0 python3.9[65565]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:26:45 compute-0 chronyd[792]: chronyd exiting
Jan 23 11:26:45 compute-0 systemd[1]: Stopping NTP client/server...
Jan 23 11:26:45 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 23 11:26:45 compute-0 systemd[1]: Stopped NTP client/server.
Jan 23 11:26:45 compute-0 systemd[1]: Starting NTP client/server...
Jan 23 11:26:45 compute-0 chronyd[65574]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 11:26:45 compute-0 chronyd[65574]: Frequency -25.011 +/- 0.063 ppm read from /var/lib/chrony/drift
Jan 23 11:26:45 compute-0 chronyd[65574]: Loaded seccomp filter (level 2)
Jan 23 11:26:45 compute-0 systemd[1]: Started NTP client/server.
Jan 23 11:26:45 compute-0 sudo[65563]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:45 compute-0 sshd-session[60717]: Connection closed by 192.168.122.30 port 41820
Jan 23 11:26:45 compute-0 sshd-session[60714]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:26:45 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 11:26:45 compute-0 systemd[1]: session-14.scope: Consumed 22.591s CPU time.
Jan 23 11:26:45 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Jan 23 11:26:45 compute-0 systemd-logind[798]: Removed session 14.
Jan 23 11:26:52 compute-0 sshd-session[65600]: Accepted publickey for zuul from 192.168.122.30 port 46698 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:26:52 compute-0 systemd-logind[798]: New session 15 of user zuul.
Jan 23 11:26:52 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 23 11:26:52 compute-0 sshd-session[65600]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:26:53 compute-0 python3.9[65753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:26:54 compute-0 sudo[65907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkafllykgbithahepztnugtethpajeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167613.9755166-28-244987589135739/AnsiballZ_file.py'
Jan 23 11:26:54 compute-0 sudo[65907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:54 compute-0 python3.9[65909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:54 compute-0 sudo[65907]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:55 compute-0 sudo[66082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zufadyogfrdkdqlbdbzzggoironcfshw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167614.7543418-36-193317492778411/AnsiballZ_stat.py'
Jan 23 11:26:55 compute-0 sudo[66082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:55 compute-0 python3.9[66084]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:55 compute-0 sudo[66082]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:55 compute-0 sudo[66160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjaqyinlmvprhthtypwyojypdhpakja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167614.7543418-36-193317492778411/AnsiballZ_file.py'
Jan 23 11:26:55 compute-0 sudo[66160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:55 compute-0 python3.9[66162]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8mgni5bb recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:55 compute-0 sudo[66160]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:56 compute-0 sudo[66312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhmovsntdpaobdrenjfvguztfyvyqaoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167616.0958147-56-263743518091430/AnsiballZ_stat.py'
Jan 23 11:26:56 compute-0 sudo[66312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:56 compute-0 python3.9[66314]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:56 compute-0 sudo[66312]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:57 compute-0 sudo[66435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqtgpnkfcnrmppmetvqytivbpmivibmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167616.0958147-56-263743518091430/AnsiballZ_copy.py'
Jan 23 11:26:57 compute-0 sudo[66435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:57 compute-0 python3.9[66437]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167616.0958147-56-263743518091430/.source _original_basename=.5fwby_s1 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:26:57 compute-0 sudo[66435]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:58 compute-0 sudo[66587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoesvpahtvmslvqwjzvhectjmgpryjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167617.82443-72-82090766785835/AnsiballZ_file.py'
Jan 23 11:26:58 compute-0 sudo[66587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:58 compute-0 python3.9[66589]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:58 compute-0 sudo[66587]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:58 compute-0 sudo[66739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkwlccqnqgruaxnqerdyleybtxcpowzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167618.5440683-80-34123836755817/AnsiballZ_stat.py'
Jan 23 11:26:58 compute-0 sudo[66739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:59 compute-0 python3.9[66741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:26:59 compute-0 sudo[66739]: pam_unix(sudo:session): session closed for user root
Jan 23 11:26:59 compute-0 sudo[66862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dslrsrsvbkqffcxgcbbjnbzxcfxsajwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167618.5440683-80-34123836755817/AnsiballZ_copy.py'
Jan 23 11:26:59 compute-0 sudo[66862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:26:59 compute-0 python3.9[66864]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167618.5440683-80-34123836755817/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:26:59 compute-0 sudo[66862]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:00 compute-0 sudo[67014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znskrruiwcbzaxuvwirxajisordkmqrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167619.7862673-80-274973048458548/AnsiballZ_stat.py'
Jan 23 11:27:00 compute-0 sudo[67014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:00 compute-0 python3.9[67016]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:00 compute-0 sudo[67014]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:00 compute-0 sudo[67137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyobvgcdkrdauldtaxxjsfhktunuefet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167619.7862673-80-274973048458548/AnsiballZ_copy.py'
Jan 23 11:27:00 compute-0 sudo[67137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:00 compute-0 python3.9[67139]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167619.7862673-80-274973048458548/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:27:00 compute-0 sudo[67137]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:01 compute-0 sudo[67289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnjtksweaetokwxwyofnbugvyqswkmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167620.8969548-109-131377400422115/AnsiballZ_file.py'
Jan 23 11:27:01 compute-0 sudo[67289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:01 compute-0 python3.9[67291]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:01 compute-0 sudo[67289]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:01 compute-0 sudo[67441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpkponfixhiztyorucwjvmyacqoebpzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167621.5189347-117-196140653918233/AnsiballZ_stat.py'
Jan 23 11:27:01 compute-0 sudo[67441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:02 compute-0 python3.9[67443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:02 compute-0 sudo[67441]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:02 compute-0 sudo[67564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brtelmzugnnjsikovouhjdznvmbkawzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167621.5189347-117-196140653918233/AnsiballZ_copy.py'
Jan 23 11:27:02 compute-0 sudo[67564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:02 compute-0 python3.9[67566]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167621.5189347-117-196140653918233/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:02 compute-0 sudo[67564]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:03 compute-0 sudo[67716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxhdqdgzjaukfrjvztlieunidkggwic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167622.7539897-132-105484507750873/AnsiballZ_stat.py'
Jan 23 11:27:03 compute-0 sudo[67716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:03 compute-0 python3.9[67718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:03 compute-0 sudo[67716]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:03 compute-0 sudo[67839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eedodbjvngsoegsqeqqbyhqaxyuobqin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167622.7539897-132-105484507750873/AnsiballZ_copy.py'
Jan 23 11:27:03 compute-0 sudo[67839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:03 compute-0 python3.9[67841]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167622.7539897-132-105484507750873/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:03 compute-0 sudo[67839]: pam_unix(sudo:session): session closed for user root
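systemd preset files supply the default enable/disable policy applied by 'systemctl preset' when a unit is first installed. Only the checksum of 91-edpm-container-shutdown.preset is logged, but a preset for this purpose is normally a single directive (hypothetical contents):

    # 91-edpm-container-shutdown.preset (assumed; the real file is identified only by checksum)
    enable edpm-container-shutdown.service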
Jan 23 11:27:04 compute-0 sudo[67991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pshqgvchywqmonjoiwgjqiwidfsoabxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167623.928878-147-149231363218993/AnsiballZ_systemd.py'
Jan 23 11:27:04 compute-0 sudo[67991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:04 compute-0 python3.9[67993]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:27:04 compute-0 systemd[1]: Reloading.
Jan 23 11:27:04 compute-0 systemd-sysv-generator[68025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:27:04 compute-0 systemd-rc-local-generator[68020]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:27:05 compute-0 systemd[1]: Reloading.
Jan 23 11:27:05 compute-0 systemd-rc-local-generator[68058]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:27:05 compute-0 systemd-sysv-generator[68062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:27:05 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 23 11:27:05 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 23 11:27:05 compute-0 sudo[67991]: pam_unix(sudo:session): session closed for user root
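The systemd module call above collapses to the familiar CLI sequence, with the surrounding "Reloading." entries bracketing the enable/start:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service

"Finished EDPM Container Shutdown." immediately after "Starting" indicates a oneshot unit that runs to completion rather than a long-running daemon.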
Jan 23 11:27:05 compute-0 sudo[68220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laglxwrqtfdsibsdcropfditdgfuxeed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167625.5046272-155-169946820328092/AnsiballZ_stat.py'
Jan 23 11:27:05 compute-0 sudo[68220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:05 compute-0 python3.9[68222]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:05 compute-0 sudo[68220]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:06 compute-0 sudo[68343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syersmralwmyoerseyvdafkkcpwerspb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167625.5046272-155-169946820328092/AnsiballZ_copy.py'
Jan 23 11:27:06 compute-0 sudo[68343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:06 compute-0 python3.9[68345]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167625.5046272-155-169946820328092/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:06 compute-0 sudo[68343]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:06 compute-0 sudo[68495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatadvlwyglnxhlfvonxekzooljknwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167626.6917067-170-10385092015684/AnsiballZ_stat.py'
Jan 23 11:27:06 compute-0 sudo[68495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:07 compute-0 python3.9[68497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:07 compute-0 sudo[68495]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:07 compute-0 sudo[68618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icumiouzsvixbvbmkfgjxajusyrymxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167626.6917067-170-10385092015684/AnsiballZ_copy.py'
Jan 23 11:27:07 compute-0 sudo[68618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:07 compute-0 python3.9[68620]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167626.6917067-170-10385092015684/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:07 compute-0 sudo[68618]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:07 compute-0 sudo[68770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhnmkefpvvsmcqcvvtlynephrucvfrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167627.7283425-185-232224686734133/AnsiballZ_systemd.py'
Jan 23 11:27:07 compute-0 sudo[68770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:08 compute-0 python3.9[68772]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:27:08 compute-0 systemd[1]: Reloading.
Jan 23 11:27:08 compute-0 systemd-rc-local-generator[68799]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:27:08 compute-0 systemd-sysv-generator[68804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:27:08 compute-0 systemd[1]: Reloading.
Jan 23 11:27:08 compute-0 systemd-rc-local-generator[68836]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:27:08 compute-0 systemd-sysv-generator[68839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:27:08 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 11:27:08 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 11:27:08 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 11:27:08 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 11:27:08 compute-0 sudo[68770]: pam_unix(sudo:session): session closed for user root
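netns-placeholder follows the same oneshot pattern: it pre-creates the /run/netns mount infrastructure so containers can attach network namespaces later. The transient run-netns-placeholder.mount unit in the log is the bind mount that "ip netns add" creates at /run/netns/placeholder, so the service plausibly reduces to something like the following (assumed; the shipped unit is logged only by checksum):

    ip netns add placeholder      # sets up /run/netns and bind-mounts /run/netns/placeholder
    ip netns delete placeholder   # drops the placeholder; /run/netns itself stays mounted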
Jan 23 11:27:09 compute-0 python3.9[68998]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:27:09 compute-0 network[69015]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:27:09 compute-0 network[69016]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:27:09 compute-0 network[69017]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:27:12 compute-0 sudo[69277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkorjplslkisjniwkqdwwkkchudaffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167632.394002-201-93269562673007/AnsiballZ_systemd.py'
Jan 23 11:27:12 compute-0 sudo[69277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:12 compute-0 python3.9[69279]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:27:13 compute-0 systemd[1]: Reloading.
Jan 23 11:27:13 compute-0 systemd-rc-local-generator[69304]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:27:13 compute-0 systemd-sysv-generator[69307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:27:13 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 23 11:27:13 compute-0 iptables.init[69318]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 23 11:27:13 compute-0 iptables.init[69318]: iptables: Flushing firewall rules: [  OK  ]
Jan 23 11:27:13 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 23 11:27:13 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 23 11:27:13 compute-0 sudo[69277]: pam_unix(sudo:session): session closed for user root
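Disabling the legacy firewall is the handover to nftables. The two iptables.init lines come from the initscript's stop action: chains are reset to policy ACCEPT and flushed, so the host briefly sits with no packet filtering until the nft ruleset is loaded below. CLI equivalent:

    systemctl disable --now iptables.service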
Jan 23 11:27:13 compute-0 sudo[69512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzwaxfxyqetuqjiiirmlnddkdfvqcrok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167633.668039-201-79875887052777/AnsiballZ_systemd.py'
Jan 23 11:27:13 compute-0 sudo[69512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:14 compute-0 python3.9[69514]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:27:14 compute-0 sudo[69512]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:14 compute-0 sudo[69666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjztkwnklajnbhrawlhcyqhgfyujpcoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167634.5342927-217-277657345782928/AnsiballZ_systemd.py'
Jan 23 11:27:14 compute-0 sudo[69666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:15 compute-0 python3.9[69668]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:27:15 compute-0 systemd[1]: Reloading.
Jan 23 11:27:15 compute-0 systemd-rc-local-generator[69692]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:27:15 compute-0 systemd-sysv-generator[69697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:27:15 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 23 11:27:15 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 23 11:27:15 compute-0 sudo[69666]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:16 compute-0 sudo[69858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxtwgxozbyqanjpeneqpkkvplxcgkody ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167635.5934327-225-83173335778604/AnsiballZ_command.py'
Jan 23 11:27:16 compute-0 sudo[69858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:16 compute-0 python3.9[69860]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:27:16 compute-0 sudo[69858]: pam_unix(sudo:session): session closed for user root
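With the nftables service on, the play clears whatever ruleset is currently loaded so the generated files start from a clean slate. "nft flush ruleset" removes every table, chain, and rule across all address families in one transaction:

    nft flush ruleset
    nft list ruleset    # prints nothing afterwards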
Jan 23 11:27:16 compute-0 sudo[70011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paaozqxbtxwwfyckwozfmonghbznpzza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167636.6545923-239-197500692776357/AnsiballZ_stat.py'
Jan 23 11:27:16 compute-0 sudo[70011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:17 compute-0 python3.9[70013]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:17 compute-0 sudo[70011]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:17 compute-0 sudo[70136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdyirmzhjvhtfeshjubvhqpmhnllmyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167636.6545923-239-197500692776357/AnsiballZ_copy.py'
Jan 23 11:27:17 compute-0 sudo[70136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:17 compute-0 python3.9[70138]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167636.6545923-239-197500692776357/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:17 compute-0 sudo[70136]: pam_unix(sudo:session): session closed for user root
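Note the validate=/usr/sbin/sshd -T -f %s on the sshd_config copy: Ansible substitutes %s with the staged temp file and only moves it into place if the command exits 0, so a config sshd cannot parse never reaches /etc/ssh/sshd_config. The manual analogue (staged path illustrative):

    /usr/sbin/sshd -T -f /tmp/sshd_config.staged    # extended test mode; non-zero exit on errors
    install -m 0600 /tmp/sshd_config.staged /etc/ssh/sshd_config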
Jan 23 11:27:18 compute-0 sudo[70289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvaxjwrqbeypeyurmxxbzfrdlshjzmdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167638.0983183-254-148932525284362/AnsiballZ_systemd.py'
Jan 23 11:27:18 compute-0 sudo[70289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:18 compute-0 python3.9[70291]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:27:18 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 23 11:27:18 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 23 11:27:18 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 23 11:27:18 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 23 11:27:18 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 23 11:27:18 compute-0 sudo[70289]: pam_unix(sudo:session): session closed for user root
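state=reloaded maps to "systemctl reload sshd"; on RHEL-family units ExecReload is typically "/bin/kill -HUP $MAINPID", which matches the "Received SIGHUP; restarting." line. On SIGHUP sshd re-executes itself, re-reads the new config, and re-binds its listeners, hence the fresh "Server listening" messages on port 22:

    systemctl reload sshd    # ~ kill -HUP $(systemctl show -p MainPID --value sshd)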
Jan 23 11:27:19 compute-0 sudo[70445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxljgiozzdvtvecfwephshfgvapaqdyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167639.2639928-262-163379627926583/AnsiballZ_file.py'
Jan 23 11:27:19 compute-0 sudo[70445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:19 compute-0 python3.9[70447]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:19 compute-0 sudo[70445]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:20 compute-0 sudo[70597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjafqvbkcotghttyknqgibvdttyhwgix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167639.9097354-270-95705848374742/AnsiballZ_stat.py'
Jan 23 11:27:20 compute-0 sudo[70597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:20 compute-0 python3.9[70599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:20 compute-0 sudo[70597]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:20 compute-0 sudo[70720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feilzzyeiybujvchhbvrssonmequfnoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167639.9097354-270-95705848374742/AnsiballZ_copy.py'
Jan 23 11:27:20 compute-0 sudo[70720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:20 compute-0 python3.9[70722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167639.9097354-270-95705848374742/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:20 compute-0 sudo[70720]: pam_unix(sudo:session): session closed for user root
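The YAML dropped into /var/lib/edpm-config/firewall/ is input for the edpm_nftables_from_files module invoked further down, which folds every file in that directory into the generated nft rule set. The exact schema is not visible in the log; in edpm-ansible these files are typically a list of rule_name/rule mappings, roughly (assumed shape and values):

    # sshd-networks.yaml, hypothetical
    - rule_name: '003 accept ssh from ctlplane'
      rule:
        proto: tcp
        dport: 22
        source: 192.168.122.0/24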
Jan 23 11:27:21 compute-0 sudo[70872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqywdzgzjtvjtldtfpgopvaekckplhwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167641.234052-288-65326440678897/AnsiballZ_timezone.py'
Jan 23 11:27:21 compute-0 sudo[70872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:21 compute-0 python3.9[70874]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 11:27:21 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 11:27:22 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 11:27:22 compute-0 sudo[70872]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:22 compute-0 sudo[71028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhuglfekopzqqqvaxkiwkuhzlnusdtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167642.320851-297-16895578973047/AnsiballZ_file.py'
Jan 23 11:27:22 compute-0 sudo[71028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:22 compute-0 python3.9[71030]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:22 compute-0 sudo[71028]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:23 compute-0 sudo[71180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sslkepjbzqmxbecnhudtkacstgbgergs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167642.9401593-305-72016409497676/AnsiballZ_stat.py'
Jan 23 11:27:23 compute-0 sudo[71180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:23 compute-0 python3.9[71182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:23 compute-0 sudo[71180]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:23 compute-0 sudo[71303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgrqqosblwyhzfzrdkzvaasjcvnxhrns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167642.9401593-305-72016409497676/AnsiballZ_copy.py'
Jan 23 11:27:23 compute-0 sudo[71303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:24 compute-0 python3.9[71305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167642.9401593-305-72016409497676/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:24 compute-0 sudo[71303]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:24 compute-0 sudo[71455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtbjmiuvqxxgxqotrfcaretenigajei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167644.19463-320-34785174984663/AnsiballZ_stat.py'
Jan 23 11:27:24 compute-0 sudo[71455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:24 compute-0 python3.9[71457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:24 compute-0 sudo[71455]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:24 compute-0 sudo[71578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtehujgyyeqkywjibwyurxfcpunwmaxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167644.19463-320-34785174984663/AnsiballZ_copy.py'
Jan 23 11:27:24 compute-0 sudo[71578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:25 compute-0 python3.9[71580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167644.19463-320-34785174984663/.source.yaml _original_basename=.wj6rwbj4 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:25 compute-0 sudo[71578]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:25 compute-0 sudo[71730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oboxfhdwpcqkpwwljfmboohdoanxhcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167645.3170338-335-133423593770934/AnsiballZ_stat.py'
Jan 23 11:27:25 compute-0 sudo[71730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:25 compute-0 python3.9[71732]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:25 compute-0 sudo[71730]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:26 compute-0 sudo[71853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgpkohluoourzwdjvzzskonritfthabj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167645.3170338-335-133423593770934/AnsiballZ_copy.py'
Jan 23 11:27:26 compute-0 sudo[71853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:26 compute-0 python3.9[71855]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167645.3170338-335-133423593770934/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:26 compute-0 sudo[71853]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:26 compute-0 sudo[72005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witaffkxuyjuxobrpmzsvacywxihodux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167646.363622-350-204374703260096/AnsiballZ_command.py'
Jan 23 11:27:26 compute-0 sudo[72005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:26 compute-0 python3.9[72007]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:27:26 compute-0 sudo[72005]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:27 compute-0 sudo[72158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aedambzcyyaahlmozjwogyvlbmrdecid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167646.9326599-358-105821846795607/AnsiballZ_command.py'
Jan 23 11:27:27 compute-0 sudo[72158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:27 compute-0 python3.9[72160]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:27:27 compute-0 sudo[72158]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:28 compute-0 sudo[72311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqntpuvpnoqtgpuaumznmneqsehliimy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769167647.7252486-366-127594816544209/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 11:27:28 compute-0 sudo[72311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:28 compute-0 python3[72313]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 11:27:28 compute-0 sudo[72311]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:28 compute-0 sudo[72463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebkgrdsficdhiknweksoqtqtcfnwnxrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167648.601041-374-190266558082129/AnsiballZ_stat.py'
Jan 23 11:27:28 compute-0 sudo[72463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:29 compute-0 python3.9[72465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:29 compute-0 sudo[72463]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:29 compute-0 sudo[72586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlakclrelqztjnqyerxtyuzdkikvmble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167648.601041-374-190266558082129/AnsiballZ_copy.py'
Jan 23 11:27:29 compute-0 sudo[72586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:29 compute-0 python3.9[72588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167648.601041-374-190266558082129/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:29 compute-0 sudo[72586]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:30 compute-0 sudo[72738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aysgyoamedzxphslwwruusqjzgrbjcji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167649.7120728-389-831085480364/AnsiballZ_stat.py'
Jan 23 11:27:30 compute-0 sudo[72738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:30 compute-0 python3.9[72740]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:30 compute-0 sudo[72738]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:30 compute-0 sudo[72861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyymnxelxdvmuhnohaykbzduwinwfgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167649.7120728-389-831085480364/AnsiballZ_copy.py'
Jan 23 11:27:30 compute-0 sudo[72861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:30 compute-0 python3.9[72863]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167649.7120728-389-831085480364/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:30 compute-0 sudo[72861]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:31 compute-0 sudo[73013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgfhmzmalnobvoxfnduofmbnzvslkqjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167651.008077-404-119975591000315/AnsiballZ_stat.py'
Jan 23 11:27:31 compute-0 sudo[73013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:31 compute-0 python3.9[73015]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:31 compute-0 sudo[73013]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:31 compute-0 sudo[73136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npgpglpnmzjegzvlgrqaetisykehdivr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167651.008077-404-119975591000315/AnsiballZ_copy.py'
Jan 23 11:27:31 compute-0 sudo[73136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:32 compute-0 python3.9[73138]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167651.008077-404-119975591000315/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:32 compute-0 sudo[73136]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:32 compute-0 sudo[73288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlforgwepnacfqefsunkbxteryehwedz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167652.2435873-419-39211834950117/AnsiballZ_stat.py'
Jan 23 11:27:32 compute-0 sudo[73288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:32 compute-0 python3.9[73290]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:32 compute-0 sudo[73288]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:33 compute-0 sudo[73411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rifclhqgzpsddxkmazearslzddkktnpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167652.2435873-419-39211834950117/AnsiballZ_copy.py'
Jan 23 11:27:33 compute-0 sudo[73411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:33 compute-0 python3.9[73413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167652.2435873-419-39211834950117/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:33 compute-0 sudo[73411]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:33 compute-0 sudo[73563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnheahaqzxkvjbaljdgzqtajiibwqrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167653.3368282-434-54142572341054/AnsiballZ_stat.py'
Jan 23 11:27:33 compute-0 sudo[73563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:33 compute-0 python3.9[73565]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:27:33 compute-0 sudo[73563]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:34 compute-0 sudo[73686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxjylpiksvguixthninyqyxcycuqjjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167653.3368282-434-54142572341054/AnsiballZ_copy.py'
Jan 23 11:27:34 compute-0 sudo[73686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:34 compute-0 python3.9[73688]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167653.3368282-434-54142572341054/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:34 compute-0 sudo[73686]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:34 compute-0 sudo[73838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqgnuiozfmecyozodhhcorykicfpxehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167654.5033715-449-112962935689341/AnsiballZ_file.py'
Jan 23 11:27:34 compute-0 sudo[73838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:34 compute-0 python3.9[73840]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:34 compute-0 sudo[73838]: pam_unix(sudo:session): session closed for user root
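The .changed file is a sentinel, not configuration: it records that edpm-rules.nft was rewritten. The follow-up play (11:28:02-11:28:04 below) stats the sentinel, pipes the flush/rules/update-jumps bundle into nft only when it exists, and then deletes it, i.e. roughly:

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi

That keeps rule application cheap and idempotent: runs with unchanged rules skip the live reload entirely.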
Jan 23 11:27:36 compute-0 sudo[73990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugsuxbqdgabjzpbaixcpbvrmeowhwvmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167655.1957214-457-169639173379672/AnsiballZ_command.py'
Jan 23 11:27:36 compute-0 sudo[73990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:36 compute-0 python3.9[73992]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:27:36 compute-0 sudo[73990]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:37 compute-0 sudo[74150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjcjybypncstfgmkpuetfpxgjhuayds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167656.8240614-465-129865314641684/AnsiballZ_blockinfile.py'
Jan 23 11:27:37 compute-0 sudo[74150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:37 compute-0 python3.9[74152]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:37 compute-0 sudo[74150]: pam_unix(sudo:session): session closed for user root
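This blockinfile edit is what makes the ruleset persistent: /etc/sysconfig/nftables.conf is the file nftables.service loads at boot, and validate=nft -c -f %s dry-runs the edited file before it is committed. After the edit the managed region should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The flushes and update-jumps files are deliberately absent here; they only serve live updates on a running system, not the boot-time load.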
Jan 23 11:27:38 compute-0 sudo[74303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjexedmbfwyxpqjncmpnpzfnnzlwqgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167657.791788-474-231827496504593/AnsiballZ_file.py'
Jan 23 11:27:38 compute-0 sudo[74303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:38 compute-0 python3.9[74305]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:38 compute-0 sudo[74303]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:38 compute-0 sudo[74455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvxvuotvpdvodxjdefelgcpmzckwfcqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167658.4054265-474-185068425419193/AnsiballZ_file.py'
Jan 23 11:27:38 compute-0 sudo[74455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:38 compute-0 python3.9[74457]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:38 compute-0 sudo[74455]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:39 compute-0 sudo[74607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqydgwnkothztoxztoafdzareopjkxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167659.250344-489-142725145292212/AnsiballZ_mount.py'
Jan 23 11:27:39 compute-0 sudo[74607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:39 compute-0 python3.9[74609]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 11:27:39 compute-0 sudo[74607]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:39 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:27:39 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:27:40 compute-0 sudo[74761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmjjicdoaskgiizwloahlovledmhfph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167660.118972-489-89397000610969/AnsiballZ_mount.py'
Jan 23 11:27:40 compute-0 sudo[74761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:40 compute-0 python3.9[74763]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 11:27:40 compute-0 sudo[74761]: pam_unix(sudo:session): session closed for user root
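The two mount tasks (state=mounted, boot=True) both mount immediately and persist the entries, so /etc/fstab should gain lines equivalent to:

    none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0 0
    none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0 0

giving separate mount points for 1 GiB and 2 MiB huge pages, as typically consumed by QEMU guests on a compute node.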
Jan 23 11:27:41 compute-0 sshd-session[65603]: Connection closed by 192.168.122.30 port 46698
Jan 23 11:27:41 compute-0 sshd-session[65600]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:27:41 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Jan 23 11:27:41 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 11:27:41 compute-0 systemd[1]: session-15.scope: Consumed 33.653s CPU time.
Jan 23 11:27:41 compute-0 systemd-logind[798]: Removed session 15.
Jan 23 11:27:46 compute-0 sshd-session[74789]: Accepted publickey for zuul from 192.168.122.30 port 42040 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:27:46 compute-0 systemd-logind[798]: New session 16 of user zuul.
Jan 23 11:27:46 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 23 11:27:46 compute-0 sshd-session[74789]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:27:47 compute-0 sudo[74942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhbxtnrxhcysalzybtwpgladgmyqxrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167666.7165158-16-262309834850169/AnsiballZ_tempfile.py'
Jan 23 11:27:47 compute-0 sudo[74942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:47 compute-0 python3.9[74944]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 11:27:47 compute-0 sudo[74942]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:48 compute-0 sudo[75094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htctfzqweodwoglowcaojjkmkmnwpkle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167667.7161956-28-214398545489755/AnsiballZ_stat.py'
Jan 23 11:27:48 compute-0 sudo[75094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:48 compute-0 python3.9[75096]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:27:48 compute-0 sudo[75094]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:49 compute-0 sudo[75246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjhpvtqxpfprjhoxjayzwdlqhwwdiaco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167668.4927123-38-124578804348808/AnsiballZ_setup.py'
Jan 23 11:27:49 compute-0 sudo[75246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:49 compute-0 python3.9[75248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:27:49 compute-0 sudo[75246]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:50 compute-0 sudo[75398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldbgumwdzhmauatsxhmbamebrcbvphzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167669.6525805-47-278013835591887/AnsiballZ_blockinfile.py'
Jan 23 11:27:50 compute-0 sudo[75398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:50 compute-0 python3.9[75400]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC01Sa2Ssvej+dIubGSB3ilJzSnUjMICqYEcYDDm6G21PAtQTnHyVgsASI1iSY7ZV/RDhumA3KYwh3da12ctkhewRdHLskKJ0+JpI78kUcaDDVSEnocddyDRe6TSo+E6z1VwaWhEU9nDAxfkaiVnLjW68AbILLxs9Fl7q8SNKfKsXsOMMjEB6oHgg9yMyDPniV2GDYoivDnp/GPHgnJGLwb3zIiuL8MYgKCNThaJH8Y5orHuC+kqM9uynpqb8BF1jiy1S1pujryYRBKU9q9kUu2X+H4RVlbOtd09jjnvP2QDaNFwSBqPdlunHcL8ZCV/3O1aGgycmcXdTxwym/96owmKwe8ztU7PC9kXUwrhzuB3xAQqLHNM7JyNHgNOJiJySgKCgPkQmJ6afXeM24KtWa5y+ARuDBWCVaUm3Rxo6EfQ/6run7TBkVCqhFPYHrsFo2m30Lciu1fjVZycyKDyRy+wH1agL0JAmU598nIpev34m4Q/xcaMGL17ET+2RTX9Yk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHcs09lC3Fzl3JXX7Ru6BCOGJRGFY3uNXTwPZqHuCIUP
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAVnHoAm4MXgoie9YBDpc3z09tj6lkXbHLEjvVKh0CbC+ceQWBbyty1dPX7sdVxq/VTOw2IFbLAxfpsWhJEP3Qc=
                                             create=True mode=0644 path=/tmp/ansible.6ykude3p state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:50 compute-0 sudo[75398]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:50 compute-0 sudo[75550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqmpwyimjyyyjjfmnpjyiunbqkfqtpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167670.367131-55-268678627195003/AnsiballZ_command.py'
Jan 23 11:27:50 compute-0 sudo[75550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:50 compute-0 python3.9[75552]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6ykude3p' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:27:50 compute-0 sudo[75550]: pam_unix(sudo:session): session closed for user root
Jan 23 11:27:51 compute-0 sudo[75704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrnxiexpfplxqyxhxlxwoewfzrwohgrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167671.07901-63-3593622956164/AnsiballZ_file.py'
Jan 23 11:27:51 compute-0 sudo[75704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:27:51 compute-0 python3.9[75706]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6ykude3p state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:27:51 compute-0 sudo[75704]: pam_unix(sudo:session): session closed for user root
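The known-hosts update is a three-step sequence: build the marker-delimited block in a mktemp file, cat it over /etc/ssh/ssh_known_hosts, then remove the temp file. Writing through a redirect rather than mv keeps the destination's inode, and with it its ownership and SELinux label:

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)
    # ... blockinfile fills $tmp with the marker-delimited host keys ...
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"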
Jan 23 11:27:52 compute-0 sshd-session[74792]: Connection closed by 192.168.122.30 port 42040
Jan 23 11:27:52 compute-0 sshd-session[74789]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:27:52 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 11:27:52 compute-0 systemd[1]: session-16.scope: Consumed 3.151s CPU time.
Jan 23 11:27:52 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Jan 23 11:27:52 compute-0 systemd-logind[798]: Removed session 16.
Jan 23 11:27:52 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 11:27:57 compute-0 sshd-session[75734]: Accepted publickey for zuul from 192.168.122.30 port 33604 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:27:57 compute-0 systemd-logind[798]: New session 17 of user zuul.
Jan 23 11:27:57 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 23 11:27:57 compute-0 sshd-session[75734]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:27:58 compute-0 python3.9[75887]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:27:59 compute-0 sudo[76041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npevkmbwzycnhcerbjjiumkoqyzepbyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167679.117829-27-3382856032862/AnsiballZ_systemd.py'
Jan 23 11:27:59 compute-0 sudo[76041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:00 compute-0 python3.9[76043]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 11:28:00 compute-0 sudo[76041]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:00 compute-0 sudo[76195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzgjwtizxrinnaogqqsgrfkwssxoijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167680.2715602-35-240331122594815/AnsiballZ_systemd.py'
Jan 23 11:28:00 compute-0 sudo[76195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:00 compute-0 python3.9[76197]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:28:00 compute-0 sudo[76195]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:01 compute-0 sudo[76348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjtdzhuyydtlncpfruhdncpnowklpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167681.1558661-44-23783390710525/AnsiballZ_command.py'
Jan 23 11:28:01 compute-0 sudo[76348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:01 compute-0 python3.9[76350]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:28:01 compute-0 sudo[76348]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:02 compute-0 sudo[76501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwowcicitgmlwobtbjyajjprpnwhimxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167681.974986-52-271369861593619/AnsiballZ_stat.py'
Jan 23 11:28:02 compute-0 sudo[76501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:02 compute-0 python3.9[76503]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:28:02 compute-0 sudo[76501]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:03 compute-0 sudo[76655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foagpibbbtwzydmguzqdfktekalgwcbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167682.750725-60-90447511651306/AnsiballZ_command.py'
Jan 23 11:28:03 compute-0 sudo[76655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:03 compute-0 python3.9[76657]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:28:03 compute-0 sudo[76655]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:04 compute-0 sudo[76810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpxlgzpsmjutsuoehffdfnojadofgayi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167683.6690073-68-59791010469694/AnsiballZ_file.py'
Jan 23 11:28:04 compute-0 sudo[76810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:04 compute-0 python3.9[76812]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:04 compute-0 sudo[76810]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:04 compute-0 sshd-session[75737]: Connection closed by 192.168.122.30 port 33604
Jan 23 11:28:04 compute-0 sshd-session[75734]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:28:04 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 11:28:04 compute-0 systemd[1]: session-17.scope: Consumed 4.677s CPU time.
Jan 23 11:28:04 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Jan 23 11:28:04 compute-0 systemd-logind[798]: Removed session 17.
Jan 23 11:28:09 compute-0 sshd-session[76839]: Accepted publickey for zuul from 192.168.122.30 port 54334 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:28:09 compute-0 systemd-logind[798]: New session 18 of user zuul.
Jan 23 11:28:09 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 23 11:28:09 compute-0 sshd-session[76839]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:28:10 compute-0 sshd-session[76837]: Invalid user solana from 193.32.162.146 port 50054
Jan 23 11:28:10 compute-0 sshd-session[76837]: Connection closed by invalid user solana 193.32.162.146 port 50054 [preauth]
Jan 23 11:28:10 compute-0 python3.9[76992]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:28:11 compute-0 sudo[77146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvyggssnkjyujcgavjsilxqahwtpgvip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167691.4088323-29-120875954059630/AnsiballZ_setup.py'
Jan 23 11:28:11 compute-0 sudo[77146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:12 compute-0 python3.9[77148]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:28:12 compute-0 sudo[77146]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:12 compute-0 sudo[77230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzcrvafgnnpomeaitadakfahxzyhkhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167691.4088323-29-120875954059630/AnsiballZ_dnf.py'
Jan 23 11:28:12 compute-0 sudo[77230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:12 compute-0 python3.9[77232]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 11:28:14 compute-0 sudo[77230]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:15 compute-0 python3.9[77383]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:28:16 compute-0 python3.9[77534]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:28:17 compute-0 python3.9[77684]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:28:17 compute-0 python3.9[77834]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:28:18 compute-0 sshd-session[76842]: Connection closed by 192.168.122.30 port 54334
Jan 23 11:28:18 compute-0 sshd-session[76839]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:28:18 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 11:28:18 compute-0 systemd[1]: session-18.scope: Consumed 6.001s CPU time.
Jan 23 11:28:18 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Jan 23 11:28:18 compute-0 systemd-logind[798]: Removed session 18.
Jan 23 11:28:36 compute-0 sshd-session[77859]: Accepted publickey for zuul from 192.168.122.30 port 40134 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:28:36 compute-0 systemd-logind[798]: New session 19 of user zuul.
Jan 23 11:28:36 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 23 11:28:36 compute-0 sshd-session[77859]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:28:37 compute-0 python3.9[78012]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:28:40 compute-0 sudo[78166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bafsylcvdfgewghqxcxrafmccfozxcjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167720.0059397-45-255651675614258/AnsiballZ_file.py'
Jan 23 11:28:40 compute-0 sudo[78166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:40 compute-0 python3.9[78168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:40 compute-0 sudo[78166]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:40 compute-0 sudo[78318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibhggmrvfoxxravokiveovmelmgnblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167720.7686212-45-122249297142358/AnsiballZ_file.py'
Jan 23 11:28:40 compute-0 sudo[78318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:41 compute-0 python3.9[78320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:41 compute-0 sudo[78318]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:42 compute-0 sudo[78470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lglcapxtsmivttxhwezayzfaytytacga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167721.7308025-60-63941681014855/AnsiballZ_stat.py'
Jan 23 11:28:42 compute-0 sudo[78470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:42 compute-0 python3.9[78472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:42 compute-0 sudo[78470]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:42 compute-0 sudo[78593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikxqmpazokzxdhjwkzgvtmlilkzqkatf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167721.7308025-60-63941681014855/AnsiballZ_copy.py'
Jan 23 11:28:42 compute-0 sudo[78593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:43 compute-0 python3.9[78595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167721.7308025-60-63941681014855/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=017dbf5c1a1fc158c06d1d5c14b038a86f49a711 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:43 compute-0 sudo[78593]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:43 compute-0 sudo[78745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lryoedtsgkfslhttlwlkcvateijkeuig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167723.1605487-60-1025700541120/AnsiballZ_stat.py'
Jan 23 11:28:43 compute-0 sudo[78745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:43 compute-0 python3.9[78747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:43 compute-0 sudo[78745]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:43 compute-0 sudo[78868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oajakpkamorebqavitmctuymoxiayfiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167723.1605487-60-1025700541120/AnsiballZ_copy.py'
Jan 23 11:28:43 compute-0 sudo[78868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:44 compute-0 python3.9[78870]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167723.1605487-60-1025700541120/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6d81ce9bc6ff0a5be477e31c740936ad3232b8bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:44 compute-0 sudo[78868]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:44 compute-0 sudo[79020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpedggowsyezyhzyfoyqallnlukbqxwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167724.330017-60-105672420850895/AnsiballZ_stat.py'
Jan 23 11:28:44 compute-0 sudo[79020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:44 compute-0 python3.9[79022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:44 compute-0 sudo[79020]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:45 compute-0 sudo[79143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wayrlazqkhmxzwtcspwrjtactsqvsbzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167724.330017-60-105672420850895/AnsiballZ_copy.py'
Jan 23 11:28:45 compute-0 sudo[79143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:45 compute-0 python3.9[79145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167724.330017-60-105672420850895/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d55faadbc1b9feeef0cf30137eff9c62b0c0e7ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:45 compute-0 sudo[79143]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:46 compute-0 sudo[79295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxsskvbjdfalpqnxtlqpwkgboixpigcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167726.29161-104-148953642042400/AnsiballZ_file.py'
Jan 23 11:28:46 compute-0 sudo[79295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:46 compute-0 python3.9[79297]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:46 compute-0 sudo[79295]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:47 compute-0 sudo[79447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqdvjiszjyubelpwovumjalhsofcztqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167727.022218-104-202188287041245/AnsiballZ_file.py'
Jan 23 11:28:47 compute-0 sudo[79447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:47 compute-0 python3.9[79449]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:47 compute-0 sudo[79447]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:48 compute-0 sudo[79599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznkrhwktgtrizlwrqofqmcbsykwamtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167727.8512158-119-56230626789195/AnsiballZ_stat.py'
Jan 23 11:28:48 compute-0 sudo[79599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:48 compute-0 python3.9[79601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:48 compute-0 sudo[79599]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:48 compute-0 sudo[79722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkozjogqkfeymgzoqqsjlkibgdwtktd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167727.8512158-119-56230626789195/AnsiballZ_copy.py'
Jan 23 11:28:48 compute-0 sudo[79722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:48 compute-0 python3.9[79724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167727.8512158-119-56230626789195/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eb1a99dd1b5dca1e9244e0ddb72b4b984db69afc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:48 compute-0 sudo[79722]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:49 compute-0 sudo[79874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oekednjyvmivlaubfrgzpwyxtwuvgybz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167729.113446-119-173435851469024/AnsiballZ_stat.py'
Jan 23 11:28:49 compute-0 sudo[79874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:49 compute-0 python3.9[79876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:49 compute-0 sudo[79874]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:49 compute-0 sudo[79997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzhhrbjvaxylnoxzbnxzvmfnmruhmlrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167729.113446-119-173435851469024/AnsiballZ_copy.py'
Jan 23 11:28:49 compute-0 sudo[79997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:50 compute-0 python3.9[79999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167729.113446-119-173435851469024/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6d81ce9bc6ff0a5be477e31c740936ad3232b8bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:50 compute-0 sudo[79997]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:50 compute-0 sudo[80149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkkevyqpjgjggidwkanwzeepbnipnoqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167730.2564526-119-115500873953952/AnsiballZ_stat.py'
Jan 23 11:28:50 compute-0 sudo[80149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:50 compute-0 python3.9[80151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:50 compute-0 sudo[80149]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:51 compute-0 sudo[80272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhxgsvngyndzsvnczvfcisttcdszvhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167730.2564526-119-115500873953952/AnsiballZ_copy.py'
Jan 23 11:28:51 compute-0 sudo[80272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:51 compute-0 python3.9[80274]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167730.2564526-119-115500873953952/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=278346dfa94fb25e51c5706313ed979459450208 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:51 compute-0 sudo[80272]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:51 compute-0 sudo[80424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfoqknffywnovlgevpzfmwubdnerglqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167731.5462599-163-258329356927255/AnsiballZ_file.py'
Jan 23 11:28:51 compute-0 sudo[80424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:52 compute-0 python3.9[80426]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:52 compute-0 sudo[80424]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:52 compute-0 sudo[80576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzdqxeillxkmzjtyakdwloxptxpjsyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167732.2566912-163-13037612495587/AnsiballZ_file.py'
Jan 23 11:28:52 compute-0 sudo[80576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:52 compute-0 python3.9[80578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:52 compute-0 sudo[80576]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:53 compute-0 sudo[80728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtgoaatqbpphafzwzainheqgbeleshkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167732.8941197-178-10599740638055/AnsiballZ_stat.py'
Jan 23 11:28:53 compute-0 sudo[80728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:53 compute-0 python3.9[80730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:53 compute-0 sudo[80728]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:53 compute-0 sudo[80851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulxfokabazymdokigcpscxrjjwhellmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167732.8941197-178-10599740638055/AnsiballZ_copy.py'
Jan 23 11:28:53 compute-0 sudo[80851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:53 compute-0 python3.9[80853]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167732.8941197-178-10599740638055/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f2dc67f169326890e264fd15d2cb79e38606e8a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:53 compute-0 sudo[80851]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:54 compute-0 sudo[81003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gekucbjpkbrqsljzdwdenvjiqqfiagpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167734.0982382-178-153788658671363/AnsiballZ_stat.py'
Jan 23 11:28:54 compute-0 sudo[81003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:54 compute-0 python3.9[81005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:54 compute-0 sudo[81003]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:54 compute-0 chronyd[65574]: Selected source 23.133.168.245 (pool.ntp.org)
Jan 23 11:28:54 compute-0 sudo[81126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoegfuyzbrrckelnjusizxvhpnffxcni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167734.0982382-178-153788658671363/AnsiballZ_copy.py'
Jan 23 11:28:54 compute-0 sudo[81126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:55 compute-0 python3.9[81128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167734.0982382-178-153788658671363/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=16221c2ae00bb53fb5195442fa3e59eeacd9261d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:55 compute-0 sudo[81126]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:55 compute-0 sudo[81278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mggbsulxscmvavpgffxpcvifdlpxcxvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167735.3965235-178-249252890453432/AnsiballZ_stat.py'
Jan 23 11:28:55 compute-0 sudo[81278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:55 compute-0 python3.9[81280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:55 compute-0 sudo[81278]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:56 compute-0 sudo[81401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxxjvxgymeipzzmizzqkhgmlynkreft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167735.3965235-178-249252890453432/AnsiballZ_copy.py'
Jan 23 11:28:56 compute-0 sudo[81401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:56 compute-0 python3.9[81403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167735.3965235-178-249252890453432/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3f11eab4b258d3be95a311302936904a433089ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:56 compute-0 sudo[81401]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:56 compute-0 sudo[81553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhfdqwgcllijqrdgjascffesvjmqtzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167736.5660253-222-234467032657591/AnsiballZ_file.py'
Jan 23 11:28:56 compute-0 sudo[81553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:57 compute-0 python3.9[81555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:57 compute-0 sudo[81553]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:57 compute-0 sudo[81705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irfbahbusfsmrcvufmrywjgsvetbgyif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167737.1861129-222-255876677167735/AnsiballZ_file.py'
Jan 23 11:28:57 compute-0 sudo[81705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:57 compute-0 python3.9[81707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:28:57 compute-0 sudo[81705]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:58 compute-0 sudo[81857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csqqplncugjcewgtrmcnfxhqvhoapmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167737.8042994-237-133790685049719/AnsiballZ_stat.py'
Jan 23 11:28:58 compute-0 sudo[81857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:58 compute-0 python3.9[81859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:58 compute-0 sudo[81857]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:58 compute-0 sudo[81980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zufdkxozyqgfeyltuocdbdghvvabhrwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167737.8042994-237-133790685049719/AnsiballZ_copy.py'
Jan 23 11:28:58 compute-0 sudo[81980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:58 compute-0 python3.9[81982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167737.8042994-237-133790685049719/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ed025c371f79b1af25fc90c744c18c42b738dde5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:58 compute-0 sudo[81980]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:59 compute-0 sudo[82132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umnzoqrcojoklpqzlvcwtlcppntvxpxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167738.8737957-237-161758478895838/AnsiballZ_stat.py'
Jan 23 11:28:59 compute-0 sudo[82132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:59 compute-0 python3.9[82134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:28:59 compute-0 sudo[82132]: pam_unix(sudo:session): session closed for user root
Jan 23 11:28:59 compute-0 sudo[82255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wunreohezelnnaadxkfnuzhjbvkbwwqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167738.8737957-237-161758478895838/AnsiballZ_copy.py'
Jan 23 11:28:59 compute-0 sudo[82255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:28:59 compute-0 python3.9[82257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167738.8737957-237-161758478895838/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7698c3a82c228b48d473ea8e5f03bb96893b9289 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:28:59 compute-0 sudo[82255]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:00 compute-0 sudo[82407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sozaauxeyzadkhjcjphsxdrvyarbonzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167740.0001957-237-124029495256950/AnsiballZ_stat.py'
Jan 23 11:29:00 compute-0 sudo[82407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:00 compute-0 python3.9[82409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:00 compute-0 sudo[82407]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:00 compute-0 sudo[82530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkasowbfreebnarzgikzbdiytijeinrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167740.0001957-237-124029495256950/AnsiballZ_copy.py'
Jan 23 11:29:00 compute-0 sudo[82530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:00 compute-0 python3.9[82532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167740.0001957-237-124029495256950/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2c2cfeaf695329c15b2f693a8921e7f073e02d60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:00 compute-0 sudo[82530]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:01 compute-0 sudo[82682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xedhyrsdnmryejmbmtwenopgqbozjjly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167741.1278296-281-57373456479425/AnsiballZ_file.py'
Jan 23 11:29:01 compute-0 sudo[82682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:01 compute-0 python3.9[82684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:01 compute-0 sudo[82682]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:01 compute-0 sudo[82834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgtoalebqnwkfevycwjckwugrqaaspzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167741.6608713-281-72574853817146/AnsiballZ_file.py'
Jan 23 11:29:01 compute-0 sudo[82834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:02 compute-0 python3.9[82836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:02 compute-0 sudo[82834]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:02 compute-0 sudo[82986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwrvapzfsythxiayzfrpjleyehxsiclr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167742.3131857-296-104997221376687/AnsiballZ_stat.py'
Jan 23 11:29:02 compute-0 sudo[82986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:02 compute-0 python3.9[82988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:02 compute-0 sudo[82986]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:03 compute-0 sudo[83110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbueqqbjqqwvyyhnyfmmqzbbpkaqvglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167742.3131857-296-104997221376687/AnsiballZ_copy.py'
Jan 23 11:29:03 compute-0 sudo[83110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:03 compute-0 python3.9[83112]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167742.3131857-296-104997221376687/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f37fc2a6882f11050ac3e4c910b22de9b26dda8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:03 compute-0 sudo[83110]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:04 compute-0 sudo[83262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxddjemqavefybqiqeznxawbsswdfzez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167743.6535022-296-139464610219392/AnsiballZ_stat.py'
Jan 23 11:29:04 compute-0 sudo[83262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:04 compute-0 python3.9[83264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:04 compute-0 sudo[83262]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:04 compute-0 sudo[83385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tocioutbxemwfhcndnksuoknbniumppv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167743.6535022-296-139464610219392/AnsiballZ_copy.py'
Jan 23 11:29:04 compute-0 sudo[83385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:04 compute-0 python3.9[83387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167743.6535022-296-139464610219392/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=16221c2ae00bb53fb5195442fa3e59eeacd9261d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:04 compute-0 sudo[83385]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:05 compute-0 sudo[83537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglyjgeppcgrntsgtyhpspywjfkwciqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167745.138989-296-98666659271345/AnsiballZ_stat.py'
Jan 23 11:29:05 compute-0 sudo[83537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:05 compute-0 python3.9[83539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:05 compute-0 sudo[83537]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:06 compute-0 sudo[83660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgaynqjtxyvqmisecbepecatkgtlchaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167745.138989-296-98666659271345/AnsiballZ_copy.py'
Jan 23 11:29:06 compute-0 sudo[83660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:06 compute-0 python3.9[83662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167745.138989-296-98666659271345/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4ce07cd1b619ad6370fbec88fff491cea85d4b3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:06 compute-0 sudo[83660]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:07 compute-0 sudo[83812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjnsejrvwyjslhetaudnotcgqrrqcyro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167746.905689-356-243355524361581/AnsiballZ_file.py'
Jan 23 11:29:07 compute-0 sudo[83812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:07 compute-0 python3.9[83814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:07 compute-0 sudo[83812]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:07 compute-0 sudo[83964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvnqkuemfpoerivxqkdxnnxqmxohlenj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167747.616759-364-95815986607271/AnsiballZ_stat.py'
Jan 23 11:29:07 compute-0 sudo[83964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:08 compute-0 python3.9[83966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:08 compute-0 sudo[83964]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:08 compute-0 sudo[84087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uizfvzdaelnzedaedinlzafuyzkggphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167747.616759-364-95815986607271/AnsiballZ_copy.py'
Jan 23 11:29:08 compute-0 sudo[84087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:08 compute-0 python3.9[84089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167747.616759-364-95815986607271/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:08 compute-0 sudo[84087]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:09 compute-0 sudo[84239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intffvsuzmqawrbfhlxrqafvbfxoepuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167749.0175226-380-192554724086446/AnsiballZ_file.py'
Jan 23 11:29:09 compute-0 sudo[84239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:09 compute-0 python3.9[84241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:09 compute-0 sudo[84239]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:09 compute-0 sudo[84391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czscfgrmngzhjlbmaeqphhbkeoadtseu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167749.7360618-388-167745670190633/AnsiballZ_stat.py'
Jan 23 11:29:09 compute-0 sudo[84391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:10 compute-0 python3.9[84393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:10 compute-0 sudo[84391]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:10 compute-0 sudo[84514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyhjmymzcpgrusyjjrncmnjdbiiitwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167749.7360618-388-167745670190633/AnsiballZ_copy.py'
Jan 23 11:29:10 compute-0 sudo[84514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:10 compute-0 python3.9[84516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167749.7360618-388-167745670190633/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:10 compute-0 sudo[84514]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:11 compute-0 sudo[84666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnvdoqlnatbsfszzkigxzjkuefhwlqhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167751.0036154-404-12732708161874/AnsiballZ_file.py'
Jan 23 11:29:11 compute-0 sudo[84666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:11 compute-0 python3.9[84668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:11 compute-0 sudo[84666]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:11 compute-0 sudo[84818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asmtsuagixgvslhjqzeveshexsnwijvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167751.593357-412-241257900120740/AnsiballZ_stat.py'
Jan 23 11:29:11 compute-0 sudo[84818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:12 compute-0 python3.9[84820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:12 compute-0 sudo[84818]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:12 compute-0 sudo[84941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwspkgfrkvmvqncfvbqmxnxltozhwudv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167751.593357-412-241257900120740/AnsiballZ_copy.py'
Jan 23 11:29:12 compute-0 sudo[84941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:12 compute-0 python3.9[84943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167751.593357-412-241257900120740/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:12 compute-0 sudo[84941]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:13 compute-0 sudo[85093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnbvuiygbkqpnpjscejmhrqitccgruw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167752.972386-428-8629619508323/AnsiballZ_file.py'
Jan 23 11:29:13 compute-0 sudo[85093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:13 compute-0 python3.9[85095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:13 compute-0 sudo[85093]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:13 compute-0 sudo[85245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaibbghtmbouhhurmvvvvnqhrjhvqwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167753.5812645-436-253139695231154/AnsiballZ_stat.py'
Jan 23 11:29:13 compute-0 sudo[85245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:13 compute-0 python3.9[85247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:13 compute-0 sudo[85245]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:14 compute-0 sudo[85368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgckujpiwdblkjqabxkzyunijhuyftbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167753.5812645-436-253139695231154/AnsiballZ_copy.py'
Jan 23 11:29:14 compute-0 sudo[85368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:14 compute-0 python3.9[85370]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167753.5812645-436-253139695231154/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:14 compute-0 sudo[85368]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:15 compute-0 sudo[85520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddppgtdepokrovliearsaykvyikobtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167754.7240117-452-225792399818022/AnsiballZ_file.py'
Jan 23 11:29:15 compute-0 sudo[85520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:15 compute-0 python3.9[85522]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:15 compute-0 sudo[85520]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:15 compute-0 sudo[85672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwylshfultpnguusfjjlifhsavtxchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167755.530196-460-213846959188109/AnsiballZ_stat.py'
Jan 23 11:29:15 compute-0 sudo[85672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:15 compute-0 python3.9[85674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:15 compute-0 sudo[85672]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:16 compute-0 sudo[85795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqyjltaicmszzbooxiyparzwsgdbpnua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167755.530196-460-213846959188109/AnsiballZ_copy.py'
Jan 23 11:29:16 compute-0 sudo[85795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:16 compute-0 python3.9[85797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167755.530196-460-213846959188109/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:16 compute-0 sudo[85795]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:16 compute-0 sudo[85947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emchxdjoibbjbvwszhsyacysljjsawhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167756.7047575-476-245508163771732/AnsiballZ_file.py'
Jan 23 11:29:16 compute-0 sudo[85947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:17 compute-0 python3.9[85949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:17 compute-0 sudo[85947]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:17 compute-0 sudo[86099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjytxqfjxvolvvarsplovvuygbrlvaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167757.2826862-484-201709500514535/AnsiballZ_stat.py'
Jan 23 11:29:17 compute-0 sudo[86099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:17 compute-0 python3.9[86101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:17 compute-0 sudo[86099]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:18 compute-0 sudo[86222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xezilwxgcrqbtwcxizztgtyhbsnhnowc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167757.2826862-484-201709500514535/AnsiballZ_copy.py'
Jan 23 11:29:18 compute-0 sudo[86222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:18 compute-0 python3.9[86224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167757.2826862-484-201709500514535/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:18 compute-0 sudo[86222]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:18 compute-0 sudo[86374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzmtvbugjvanyejavihpyqdkplkcvyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167758.5459719-500-167532803630502/AnsiballZ_file.py'
Jan 23 11:29:18 compute-0 sudo[86374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:18 compute-0 python3.9[86376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:18 compute-0 sudo[86374]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:19 compute-0 sudo[86526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcrbgemladjadumrivognliioobqkfsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167759.1340334-508-60212258314874/AnsiballZ_stat.py'
Jan 23 11:29:19 compute-0 sudo[86526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:19 compute-0 python3.9[86528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:19 compute-0 sudo[86526]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:20 compute-0 sudo[86649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taqzeupiafecokmaqlbjnlmkcdstmwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167759.1340334-508-60212258314874/AnsiballZ_copy.py'
Jan 23 11:29:20 compute-0 sudo[86649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:20 compute-0 python3.9[86651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167759.1340334-508-60212258314874/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:20 compute-0 sudo[86649]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:20 compute-0 sudo[86801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napvmzjfdbaautnazgpgnwyfmeiphsam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167760.5089467-524-91563730286168/AnsiballZ_file.py'
Jan 23 11:29:20 compute-0 sudo[86801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:20 compute-0 python3.9[86803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:21 compute-0 sudo[86801]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:21 compute-0 sudo[86953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scqqyprbdkaksbdkhjvblcucqmzetqhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167761.1856647-532-96434936071026/AnsiballZ_stat.py'
Jan 23 11:29:21 compute-0 sudo[86953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:21 compute-0 python3.9[86955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:21 compute-0 sudo[86953]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:22 compute-0 sudo[87076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wciqtpgzojeejmyhasttvusegfjgoydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167761.1856647-532-96434936071026/AnsiballZ_copy.py'
Jan 23 11:29:22 compute-0 sudo[87076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:22 compute-0 python3.9[87078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167761.1856647-532-96434936071026/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7ae6045ec33785a1ca6529615216791fa27be4f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:22 compute-0 sudo[87076]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:22 compute-0 sshd-session[77862]: Connection closed by 192.168.122.30 port 40134
Jan 23 11:29:22 compute-0 sshd-session[77859]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:29:22 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 11:29:22 compute-0 systemd[1]: session-19.scope: Consumed 34.324s CPU time.
Jan 23 11:29:22 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Jan 23 11:29:22 compute-0 systemd-logind[798]: Removed session 19.
Jan 23 11:29:28 compute-0 sshd-session[87103]: Accepted publickey for zuul from 192.168.122.30 port 40868 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:29:28 compute-0 systemd-logind[798]: New session 20 of user zuul.
Jan 23 11:29:28 compute-0 systemd[1]: Started Session 20 of User zuul.
Jan 23 11:29:28 compute-0 sshd-session[87103]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:29:29 compute-0 python3.9[87256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:29:30 compute-0 sudo[87410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhdqrllcckbwhgyfpgybdjcydfgippl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167769.5810235-29-91653726583803/AnsiballZ_file.py'
Jan 23 11:29:30 compute-0 sudo[87410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:30 compute-0 python3.9[87412]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:30 compute-0 sudo[87410]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:30 compute-0 sudo[87562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvpyivejqwhfkdhgoqacphnicixogxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167770.4071841-29-94597763710306/AnsiballZ_file.py'
Jan 23 11:29:30 compute-0 sudo[87562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:30 compute-0 python3.9[87564]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:29:30 compute-0 sudo[87562]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:31 compute-0 python3.9[87714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:29:32 compute-0 sudo[87864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwudniwejugfcdbuvginvkspkfaethvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167771.8981214-52-262167436694144/AnsiballZ_seboolean.py'
Jan 23 11:29:32 compute-0 sudo[87864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:32 compute-0 python3.9[87866]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 11:29:33 compute-0 sudo[87864]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:34 compute-0 sudo[88020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldkuwcvtwrkpperzvrbkbvnameulvdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167774.3749669-62-22511613509217/AnsiballZ_setup.py'
Jan 23 11:29:34 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 23 11:29:34 compute-0 sudo[88020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:34 compute-0 python3.9[88022]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:29:35 compute-0 sudo[88020]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:35 compute-0 sudo[88104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmoxkebxekjeuibwomqnxjfpmlyqrve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167774.3749669-62-22511613509217/AnsiballZ_dnf.py'
Jan 23 11:29:35 compute-0 sudo[88104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:36 compute-0 python3.9[88106]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:29:37 compute-0 sudo[88104]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:38 compute-0 sudo[88257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzcrthnzywmuaiepqzsftjdsxjmyxlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167777.8757703-74-176098922291990/AnsiballZ_systemd.py'
Jan 23 11:29:38 compute-0 sudo[88257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:38 compute-0 python3.9[88259]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:29:38 compute-0 sudo[88257]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:39 compute-0 sudo[88412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwbvfldchhixkcbsicsvgpyromssipav ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769167779.0018933-82-97871960803145/AnsiballZ_edpm_nftables_snippet.py'
Jan 23 11:29:39 compute-0 sudo[88412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:39 compute-0 python3[88414]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 23 11:29:39 compute-0 sudo[88412]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:40 compute-0 sudo[88564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vckvymexiqreyrpeqtugkxvsbafkgkxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167779.8631272-91-81780723360615/AnsiballZ_file.py'
Jan 23 11:29:40 compute-0 sudo[88564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:40 compute-0 python3.9[88566]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:40 compute-0 sudo[88564]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:41 compute-0 sudo[88716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zklvtcdzkdyjmkvoxybkvrvgsbadyevw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167780.5115166-99-153473560114604/AnsiballZ_stat.py'
Jan 23 11:29:41 compute-0 sudo[88716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:41 compute-0 python3.9[88718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:41 compute-0 sudo[88716]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:41 compute-0 sudo[88794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blqywhakaedanlfnllkvnmiixdnmcibr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167780.5115166-99-153473560114604/AnsiballZ_file.py'
Jan 23 11:29:41 compute-0 sudo[88794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:41 compute-0 python3.9[88796]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:41 compute-0 sudo[88794]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:42 compute-0 sudo[88946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piobqpthbnneoimslffmlcobwfkausbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167781.9564588-111-174722421362293/AnsiballZ_stat.py'
Jan 23 11:29:42 compute-0 sudo[88946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:42 compute-0 python3.9[88948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:42 compute-0 sudo[88946]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:42 compute-0 sudo[89024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciellptbnjluigwwayyzkuvakfikrcom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167781.9564588-111-174722421362293/AnsiballZ_file.py'
Jan 23 11:29:42 compute-0 sudo[89024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:43 compute-0 python3.9[89026]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.62xk7peo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:43 compute-0 sudo[89024]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:43 compute-0 sudo[89176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmnstctkwjzdrmfopmjoxqgtcbaeehpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167783.210631-123-161036286347355/AnsiballZ_stat.py'
Jan 23 11:29:43 compute-0 sudo[89176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:43 compute-0 python3.9[89178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:43 compute-0 sudo[89176]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:43 compute-0 sudo[89254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhemtxwafwvtbpmqmbsiejoatpwkfme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167783.210631-123-161036286347355/AnsiballZ_file.py'
Jan 23 11:29:43 compute-0 sudo[89254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:44 compute-0 python3.9[89256]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:44 compute-0 sudo[89254]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:44 compute-0 sudo[89406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlsfqkxwlwofdckymwvfrajnftuqhdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167784.402651-136-72656495914849/AnsiballZ_command.py'
Jan 23 11:29:44 compute-0 sudo[89406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:45 compute-0 python3.9[89408]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:29:45 compute-0 sudo[89406]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:46 compute-0 sudo[89559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffuezmxglafxlvvnmzpjybnofuglqrhs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769167785.5517752-144-24921780976223/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 11:29:46 compute-0 sudo[89559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:46 compute-0 python3[89561]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 11:29:46 compute-0 sudo[89559]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:46 compute-0 sudo[89711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqcgutmestuobrztswycnfemudqlkbzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167786.5437667-152-103300879936782/AnsiballZ_stat.py'
Jan 23 11:29:46 compute-0 sudo[89711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:47 compute-0 python3.9[89713]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:47 compute-0 sudo[89711]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:47 compute-0 sudo[89836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfujfqswqdwwplqpjaarsqtljtptclha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167786.5437667-152-103300879936782/AnsiballZ_copy.py'
Jan 23 11:29:47 compute-0 sudo[89836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:47 compute-0 python3.9[89838]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167786.5437667-152-103300879936782/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:47 compute-0 sudo[89836]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:48 compute-0 sudo[89988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nylmyqkbpeuvkjcmpotcvaeglclhohoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167788.0744238-167-125782885795711/AnsiballZ_stat.py'
Jan 23 11:29:48 compute-0 sudo[89988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:48 compute-0 python3.9[89990]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:48 compute-0 sudo[89988]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:49 compute-0 sudo[90113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggwogymrqntdmbmgqntzqaamjzdetkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167788.0744238-167-125782885795711/AnsiballZ_copy.py'
Jan 23 11:29:49 compute-0 sudo[90113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:49 compute-0 python3.9[90115]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167788.0744238-167-125782885795711/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:49 compute-0 sudo[90113]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:49 compute-0 sudo[90265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izsjdqncuobdddryrpofowjfcorhtctc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167789.398399-182-183929595306688/AnsiballZ_stat.py'
Jan 23 11:29:49 compute-0 sudo[90265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:49 compute-0 python3.9[90267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:49 compute-0 sudo[90265]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:50 compute-0 sudo[90390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzvacefqlqneraymfqrcvnvjgykldvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167789.398399-182-183929595306688/AnsiballZ_copy.py'
Jan 23 11:29:50 compute-0 sudo[90390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:50 compute-0 python3.9[90392]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167789.398399-182-183929595306688/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:50 compute-0 sudo[90390]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:50 compute-0 sudo[90542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecxpbesjvpsswbviybgicuzrcdeavvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167790.5442314-197-225165181724920/AnsiballZ_stat.py'
Jan 23 11:29:50 compute-0 sudo[90542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:51 compute-0 python3.9[90544]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:51 compute-0 sudo[90542]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:51 compute-0 sudo[90667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwswitmdktzrftrdzqegrqeeirprxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167790.5442314-197-225165181724920/AnsiballZ_copy.py'
Jan 23 11:29:51 compute-0 sudo[90667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:51 compute-0 python3.9[90669]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167790.5442314-197-225165181724920/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:51 compute-0 sudo[90667]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:52 compute-0 sudo[90819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npdefmausybmfljxdzlxteydsbjqhmez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167791.9781303-212-107033308527339/AnsiballZ_stat.py'
Jan 23 11:29:52 compute-0 sudo[90819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:52 compute-0 python3.9[90821]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:29:52 compute-0 sudo[90819]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:52 compute-0 sudo[90944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joujlfavtejkiuuvcusbfjlvamhkujcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167791.9781303-212-107033308527339/AnsiballZ_copy.py'
Jan 23 11:29:52 compute-0 sudo[90944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:53 compute-0 python3.9[90946]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769167791.9781303-212-107033308527339/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:53 compute-0 sudo[90944]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:53 compute-0 sudo[91096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaqufdvycanytfflxfqdldajairsehhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167793.2787101-227-163762736887396/AnsiballZ_file.py'
Jan 23 11:29:53 compute-0 sudo[91096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:53 compute-0 python3.9[91098]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:53 compute-0 sudo[91096]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:54 compute-0 sudo[91248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szbbnpwrduebxfajycezaemidhpjgsns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167793.8407476-235-103737558694983/AnsiballZ_command.py'
Jan 23 11:29:54 compute-0 sudo[91248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:54 compute-0 python3.9[91250]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:29:54 compute-0 sudo[91248]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:54 compute-0 sudo[91403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjobtoqmatvksppxnbzhbogailtwknhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167794.515181-243-192903110166629/AnsiballZ_blockinfile.py'
Jan 23 11:29:54 compute-0 sudo[91403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:55 compute-0 python3.9[91405]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:55 compute-0 sudo[91403]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:55 compute-0 sudo[91555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xulgrjqlkxrrpuecxcewwpqpxonhlsna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167795.3875232-252-142940908712545/AnsiballZ_command.py'
Jan 23 11:29:55 compute-0 sudo[91555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:56 compute-0 python3.9[91557]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:29:56 compute-0 sudo[91555]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:56 compute-0 sudo[91708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lopcbrgxfgjbtgoayljizmnlquuekova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167796.1950896-260-138010290943246/AnsiballZ_stat.py'
Jan 23 11:29:56 compute-0 sudo[91708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:56 compute-0 python3.9[91710]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:29:56 compute-0 sudo[91708]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:57 compute-0 sudo[91862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeddxnjgvfgksskkkmnktgzicpfzadwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167796.8138206-268-119726422572217/AnsiballZ_command.py'
Jan 23 11:29:57 compute-0 sudo[91862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:57 compute-0 python3.9[91864]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:29:57 compute-0 sudo[91862]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:57 compute-0 sudo[92017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onrfmllocridnqwbiltzykfhgzlhrfkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167797.473014-276-80554950269800/AnsiballZ_file.py'
Jan 23 11:29:57 compute-0 sudo[92017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:29:58 compute-0 python3.9[92019]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:29:58 compute-0 sudo[92017]: pam_unix(sudo:session): session closed for user root
Jan 23 11:29:59 compute-0 python3.9[92169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:29:59 compute-0 sudo[92320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpdvjjsrteycrgaxvrzefgqnqxvkkkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167799.6795268-316-239020453715956/AnsiballZ_command.py'
Jan 23 11:29:59 compute-0 sudo[92320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:00 compute-0 python3.9[92322]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:30:00 compute-0 ovs-vsctl[92323]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 23 11:30:00 compute-0 sudo[92320]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:00 compute-0 sudo[92473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylzpewonqbgjvfumbiawptgzsgijjrkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167800.3800204-325-176652258292338/AnsiballZ_command.py'
Jan 23 11:30:00 compute-0 sudo[92473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:00 compute-0 python3.9[92475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:30:00 compute-0 sudo[92473]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:01 compute-0 sudo[92628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyotkcdqzagspvgexmfpcqobpcbmkan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167801.0077915-333-183269605838063/AnsiballZ_command.py'
Jan 23 11:30:01 compute-0 sudo[92628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:01 compute-0 python3.9[92630]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:30:01 compute-0 ovs-vsctl[92631]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 23 11:30:01 compute-0 sudo[92628]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:01 compute-0 python3.9[92781]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:30:02 compute-0 sudo[92933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnpjakfwtryppgepbvhmsnpmwbtpfwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167802.205821-350-185448449389520/AnsiballZ_file.py'
Jan 23 11:30:02 compute-0 sudo[92933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:02 compute-0 python3.9[92935]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:02 compute-0 sudo[92933]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:03 compute-0 sudo[93085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyyuuremqbmlbrthdwyqzgvshmjfgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167802.8027399-358-164922630161756/AnsiballZ_stat.py'
Jan 23 11:30:03 compute-0 sudo[93085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:03 compute-0 python3.9[93087]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:03 compute-0 sudo[93085]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:03 compute-0 sudo[93163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slnsmcsvfkpkzwwuuukhsdzzddvuaxrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167802.8027399-358-164922630161756/AnsiballZ_file.py'
Jan 23 11:30:03 compute-0 sudo[93163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:03 compute-0 python3.9[93165]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:03 compute-0 sudo[93163]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:04 compute-0 sudo[93315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhvndkxvhjmrcglmehqwwehlaohoqzdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167803.789002-358-6500909157397/AnsiballZ_stat.py'
Jan 23 11:30:04 compute-0 sudo[93315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:04 compute-0 python3.9[93317]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:04 compute-0 sudo[93315]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:04 compute-0 sudo[93393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umuyijjclriftygijbkmuilthejiizpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167803.789002-358-6500909157397/AnsiballZ_file.py'
Jan 23 11:30:04 compute-0 sudo[93393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:04 compute-0 python3.9[93395]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:04 compute-0 sudo[93393]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:05 compute-0 sudo[93545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmevclbtjfkwuynywtedilendbpsbgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167804.8798141-381-146630672060389/AnsiballZ_file.py'
Jan 23 11:30:05 compute-0 sudo[93545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:05 compute-0 python3.9[93547]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:05 compute-0 sudo[93545]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:05 compute-0 sudo[93697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipofzsepduoycpdhyliyadujiortfibo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167805.4788463-389-82855489373460/AnsiballZ_stat.py'
Jan 23 11:30:05 compute-0 sudo[93697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:05 compute-0 python3.9[93699]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:05 compute-0 sudo[93697]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:06 compute-0 sudo[93775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezqdfobuxnfguefgnmbkzuhxdyjsfmfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167805.4788463-389-82855489373460/AnsiballZ_file.py'
Jan 23 11:30:06 compute-0 sudo[93775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:06 compute-0 python3.9[93777]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:06 compute-0 sudo[93775]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:06 compute-0 sudo[93927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbepngjuembdxyfqyyylvzqqccpfcaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167806.5009942-401-273689009956581/AnsiballZ_stat.py'
Jan 23 11:30:06 compute-0 sudo[93927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:06 compute-0 python3.9[93929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:06 compute-0 sudo[93927]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:07 compute-0 sudo[94005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thffzeaamzossxaunpwuzlqrgchboijj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167806.5009942-401-273689009956581/AnsiballZ_file.py'
Jan 23 11:30:07 compute-0 sudo[94005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:07 compute-0 python3.9[94007]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:07 compute-0 sudo[94005]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:07 compute-0 sudo[94157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbdhwmzphqwfietistdclozzyapuuuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167807.5777571-413-90478494482953/AnsiballZ_systemd.py'
Jan 23 11:30:07 compute-0 sudo[94157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:08 compute-0 python3.9[94159]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:30:08 compute-0 systemd[1]: Reloading.
Jan 23 11:30:08 compute-0 systemd-sysv-generator[94191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:30:08 compute-0 systemd-rc-local-generator[94188]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:30:08 compute-0 sudo[94157]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:08 compute-0 sudo[94347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozbqtvveisthytxbujkwybkthbpplopo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167808.5375986-421-88857068697216/AnsiballZ_stat.py'
Jan 23 11:30:08 compute-0 sudo[94347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:08 compute-0 python3.9[94349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:09 compute-0 sudo[94347]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:09 compute-0 sudo[94425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwnkrhduywzcxgrpbfpvyfymugcvfwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167808.5375986-421-88857068697216/AnsiballZ_file.py'
Jan 23 11:30:09 compute-0 sudo[94425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:09 compute-0 python3.9[94427]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:09 compute-0 sudo[94425]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:09 compute-0 sudo[94577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhphytfgmddxwyanakotrcxuxpykyloy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167809.6119847-433-236909773697999/AnsiballZ_stat.py'
Jan 23 11:30:09 compute-0 sudo[94577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:10 compute-0 python3.9[94579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:10 compute-0 sudo[94577]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:10 compute-0 sudo[94655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-johcyehirfefouvqsyowfzfesmwuwxyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167809.6119847-433-236909773697999/AnsiballZ_file.py'
Jan 23 11:30:10 compute-0 sudo[94655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:10 compute-0 python3.9[94657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:10 compute-0 sudo[94655]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:11 compute-0 sudo[94807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orjryrszrvkxuzhcwfuscmkwtjpuwche ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167810.8016696-445-143366326746855/AnsiballZ_systemd.py'
Jan 23 11:30:11 compute-0 sudo[94807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:11 compute-0 python3.9[94809]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:30:11 compute-0 systemd[1]: Reloading.
Jan 23 11:30:11 compute-0 systemd-sysv-generator[94838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:30:11 compute-0 systemd-rc-local-generator[94834]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:30:11 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 11:30:11 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 11:30:11 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 11:30:11 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 11:30:11 compute-0 sudo[94807]: pam_unix(sudo:session): session closed for user root
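netns-placeholder behaves as a oneshot: systemd logs "Deactivated successfully" and then "Finished Create netns directory", meaning the unit ran to completion and exited. A quick post-hoc check (the /run/netns path is inferred from the unit's description, so treat it as an assumption):

    systemctl status netns-placeholder.service   # inactive (dead) after success is normal for oneshot units
    ls -ld /run/netns                            # directory the placeholder unit is expected to leave behind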
Jan 23 11:30:12 compute-0 sudo[94999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivlbhiqkvfqdmwlakywjbkgzbillfmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167811.945705-455-24020357487745/AnsiballZ_file.py'
Jan 23 11:30:12 compute-0 sudo[94999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:12 compute-0 python3.9[95001]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:12 compute-0 sudo[94999]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:12 compute-0 sudo[95151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvsqpfgzveubqjcloaiydxnxkogxnljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167812.6789408-463-57834168602418/AnsiballZ_stat.py'
Jan 23 11:30:12 compute-0 sudo[95151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:13 compute-0 python3.9[95153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:13 compute-0 sudo[95151]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:13 compute-0 sudo[95274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uamovyxksoamroaxcgewbkbymiqxsqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167812.6789408-463-57834168602418/AnsiballZ_copy.py'
Jan 23 11:30:13 compute-0 sudo[95274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:13 compute-0 python3.9[95276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167812.6789408-463-57834168602418/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:13 compute-0 sudo[95274]: pam_unix(sudo:session): session closed for user root
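The stat/copy pair above installs the ovn_controller healthcheck script with owner zuul, mode 0700, and SELinux type container_file_t; the script body itself is not logged. A rough shell equivalent of those attributes (source filename from _original_basename; assumes the file is already on the host):

    # mirror the ansible copy task's owner/group/mode
    install -o zuul -g zuul -m 0700 healthcheck /var/lib/openstack/healthchecks/ovn_controller/healthcheck
    # mirror setype=container_file_t
    chcon -t container_file_t /var/lib/openstack/healthchecks/ovn_controller/healthcheck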
Jan 23 11:30:14 compute-0 sudo[95426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayrugzcrgmdelddnbjftbruhfnmptkuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167814.1035976-480-236946269439136/AnsiballZ_file.py'
Jan 23 11:30:14 compute-0 sudo[95426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:14 compute-0 python3.9[95428]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:14 compute-0 sudo[95426]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:15 compute-0 sudo[95578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vniumdeoxvejficwakcciypbbwgprpuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167814.8042822-488-38541196722316/AnsiballZ_file.py'
Jan 23 11:30:15 compute-0 sudo[95578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:15 compute-0 python3.9[95580]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:15 compute-0 sudo[95578]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:15 compute-0 sudo[95730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mngjuujjkydkwjkydgrpzfokagypccjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167815.5761974-496-122561750730517/AnsiballZ_stat.py'
Jan 23 11:30:15 compute-0 sudo[95730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:16 compute-0 python3.9[95732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:16 compute-0 sudo[95730]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:16 compute-0 sudo[95853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjujxpplfzfdrfjhoeduizufgbygnsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167815.5761974-496-122561750730517/AnsiballZ_copy.py'
Jan 23 11:30:16 compute-0 sudo[95853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:16 compute-0 python3.9[95855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167815.5761974-496-122561750730517/.source.json _original_basename=.mycglz48 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:16 compute-0 sudo[95853]: pam_unix(sudo:session): session closed for user root
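The kolla config lands at /var/lib/kolla/config_files/ovn_controller.json with mode 0600; its content is redacted in the log (content=NOT_LOGGING_PARAMETER). A generic way to eyeball it on the host afterwards, nothing edpm-specific:

    # pretty-print and syntax-check the deployed kolla config
    python3 -m json.tool /var/lib/kolla/config_files/ovn_controller.json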
Jan 23 11:30:17 compute-0 python3.9[96005]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:19 compute-0 sudo[96426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcadgryogqiyoubqfveqhxxnsaywjsyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167819.4118283-536-215711435523834/AnsiballZ_container_config_data.py'
Jan 23 11:30:19 compute-0 sudo[96426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:20 compute-0 python3.9[96428]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 23 11:30:20 compute-0 sudo[96426]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:22 compute-0 sudo[96578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytkdxgjpfkaxtfawggkanincppjpalw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167820.3315907-547-264262325883850/AnsiballZ_container_config_hash.py'
Jan 23 11:30:22 compute-0 sudo[96578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:22 compute-0 python3.9[96580]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:30:22 compute-0 sudo[96578]: pam_unix(sudo:session): session closed for user root
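container_config_hash walks the config volumes under /var/lib/openstack (the config_vol_prefix above); the resulting digests show up a second later as the EDPM_CONFIG_HASH environment variable in the podman create call. One way to read that value back off the created container (standard podman Go-template inspect; container name from the log):

    podman inspect ovn_controller \
      --format '{{range .Config.Env}}{{println .}}{{end}}' | grep '^EDPM_CONFIG_HASH'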
Jan 23 11:30:23 compute-0 sudo[96730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inbycdbvodxpyjbeucihmkqeycpjezpz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769167823.0239005-557-107021949192039/AnsiballZ_edpm_container_manage.py'
Jan 23 11:30:23 compute-0 sudo[96730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:23 compute-0 python3[96732]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:30:24 compute-0 podman[96769]: 2026-01-23 11:30:24.028450419 +0000 UTC m=+0.068762305 container create 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:30:24 compute-0 podman[96769]: 2026-01-23 11:30:23.984379142 +0000 UTC m=+0.024691058 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 11:30:24 compute-0 python3[96732]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 11:30:24 compute-0 sudo[96730]: pam_unix(sudo:session): session closed for user root
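The PODMAN-CONTAINER-DEBUG entry above dumps the entire podman create invocation as one line. Here is the same command re-wrapped for readability; flags and paths are verbatim from the log, with the EDPM_CONFIG_HASH value and the config_data label abbreviated to "..." because both appear in full above:

    podman create --name ovn_controller \
      --conmon-pidfile /run/ovn_controller.pid \
      --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
      --env EDPM_CONFIG_HASH=4c124c0b... \
      --healthcheck-command /openstack/healthcheck \
      --label config_id=ovn_controller \
      --label container_name=ovn_controller \
      --label managed_by=edpm_ansible \
      --label 'config_data={...}' \
      --log-driver journald --log-level info \
      --network host --privileged=True --user root \
      --volume /lib/modules:/lib/modules:ro \
      --volume /run:/run \
      --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z \
      --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
      --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z \
      --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z \
      --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z \
      --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z \
      --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
      quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified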
Jan 23 11:30:24 compute-0 sudo[96957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnjpcluampehsolcecssrfxvmuxpaaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167824.3169465-565-218187448040039/AnsiballZ_stat.py'
Jan 23 11:30:24 compute-0 sudo[96957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:24 compute-0 python3.9[96959]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:30:24 compute-0 sudo[96957]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 11:30:25 compute-0 sudo[97113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfvspdcmfmpavyzgqyqdowommqfleac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167825.0857935-574-162691416668167/AnsiballZ_file.py'
Jan 23 11:30:25 compute-0 sudo[97113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:25 compute-0 sshd-session[96986]: Invalid user solana from 193.32.162.146 port 58408
Jan 23 11:30:25 compute-0 sshd-session[96986]: Connection closed by invalid user solana 193.32.162.146 port 58408 [preauth]
Jan 23 11:30:25 compute-0 python3.9[97115]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:25 compute-0 sudo[97113]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:25 compute-0 sudo[97189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgxhcdklksqsrnsmmkxmcxkjiaktlgvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167825.0857935-574-162691416668167/AnsiballZ_stat.py'
Jan 23 11:30:25 compute-0 sudo[97189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:26 compute-0 python3.9[97191]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:30:26 compute-0 sudo[97189]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:26 compute-0 sudo[97340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igwidhxtuwykgcascqzboxlnqoebotel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167826.0937462-574-89324176978553/AnsiballZ_copy.py'
Jan 23 11:30:26 compute-0 sudo[97340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:26 compute-0 python3.9[97342]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769167826.0937462-574-89324176978553/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:26 compute-0 sudo[97340]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:26 compute-0 sudo[97416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laupzwydxbaeqyyltivikvfxeqgnfrew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167826.0937462-574-89324176978553/AnsiballZ_systemd.py'
Jan 23 11:30:26 compute-0 sudo[97416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:27 compute-0 python3.9[97418]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:30:27 compute-0 systemd[1]: Reloading.
Jan 23 11:30:27 compute-0 systemd-rc-local-generator[97445]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:30:27 compute-0 systemd-sysv-generator[97448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:30:27 compute-0 sudo[97416]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:27 compute-0 sudo[97526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnrehtsoivtixdqhfxsxryxvxgrhqvbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167826.0937462-574-89324176978553/AnsiballZ_systemd.py'
Jan 23 11:30:27 compute-0 sudo[97526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:27 compute-0 python3.9[97528]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:30:28 compute-0 systemd[1]: Reloading.
Jan 23 11:30:28 compute-0 systemd-sysv-generator[97556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:30:28 compute-0 systemd-rc-local-generator[97553]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:30:28 compute-0 systemd[1]: Starting ovn_controller container...
Jan 23 11:30:28 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 23 11:30:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0745d590cc395a17dd4e8735c51ac93f3911492c4fdacb2859738fcf08a730c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 23 11:30:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.
Jan 23 11:30:28 compute-0 podman[97568]: 2026-01-23 11:30:28.4361464 +0000 UTC m=+0.134921227 container init 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + sudo -E kolla_set_configs
Jan 23 11:30:28 compute-0 podman[97568]: 2026-01-23 11:30:28.458164704 +0000 UTC m=+0.156939511 container start 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 23 11:30:28 compute-0 edpm-start-podman-container[97568]: ovn_controller
Jan 23 11:30:28 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 23 11:30:28 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 23 11:30:28 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 23 11:30:28 compute-0 edpm-start-podman-container[97567]: Creating additional drop-in dependency for "ovn_controller" (1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170)
Jan 23 11:30:28 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 23 11:30:28 compute-0 systemd[97616]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 23 11:30:28 compute-0 systemd[1]: Reloading.
Jan 23 11:30:28 compute-0 podman[97588]: 2026-01-23 11:30:28.545123787 +0000 UTC m=+0.067555562 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 11:30:28 compute-0 systemd-rc-local-generator[97665]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:30:28 compute-0 systemd-sysv-generator[97668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:30:28 compute-0 systemd[97616]: Queued start job for default target Main User Target.
Jan 23 11:30:28 compute-0 systemd[97616]: Created slice User Application Slice.
Jan 23 11:30:28 compute-0 systemd[97616]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 23 11:30:28 compute-0 systemd[97616]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 11:30:28 compute-0 systemd[97616]: Reached target Paths.
Jan 23 11:30:28 compute-0 systemd[97616]: Reached target Timers.
Jan 23 11:30:28 compute-0 systemd[97616]: Starting D-Bus User Message Bus Socket...
Jan 23 11:30:28 compute-0 systemd[97616]: Starting Create User's Volatile Files and Directories...
Jan 23 11:30:28 compute-0 systemd[97616]: Listening on D-Bus User Message Bus Socket.
Jan 23 11:30:28 compute-0 systemd[97616]: Reached target Sockets.
Jan 23 11:30:28 compute-0 systemd[97616]: Finished Create User's Volatile Files and Directories.
Jan 23 11:30:28 compute-0 systemd[97616]: Reached target Basic System.
Jan 23 11:30:28 compute-0 systemd[97616]: Reached target Main User Target.
Jan 23 11:30:28 compute-0 systemd[97616]: Startup finished in 124ms.
Jan 23 11:30:28 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 23 11:30:28 compute-0 systemd[1]: Started ovn_controller container.
Jan 23 11:30:28 compute-0 systemd[1]: 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170-6aaaece58fc01608.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:30:28 compute-0 systemd[1]: 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170-6aaaece58fc01608.service: Failed with result 'exit-code'.
Jan 23 11:30:28 compute-0 systemd[1]: Started Session c1 of User root.
Jan 23 11:30:28 compute-0 sudo[97526]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:28 compute-0 ovn_controller[97581]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:30:28 compute-0 ovn_controller[97581]: INFO:__main__:Validating config file
Jan 23 11:30:28 compute-0 ovn_controller[97581]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:30:28 compute-0 ovn_controller[97581]: INFO:__main__:Writing out command to execute
Jan 23 11:30:28 compute-0 ovn_controller[97581]: ++ cat /run_command
Jan 23 11:30:28 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + ARGS=
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + sudo kolla_copy_cacerts
Jan 23 11:30:28 compute-0 systemd[1]: Started Session c2 of User root.
Jan 23 11:30:28 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + [[ ! -n '' ]]
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + . kolla_extend_start
Jan 23 11:30:28 compute-0 ovn_controller[97581]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + umask 0022
Jan 23 11:30:28 compute-0 ovn_controller[97581]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 23 11:30:28 compute-0 NetworkManager[56133]: <info>  [1769167828.9297] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 23 11:30:28 compute-0 NetworkManager[56133]: <info>  [1769167828.9305] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:30:28 compute-0 NetworkManager[56133]: <warn>  [1769167828.9307] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:30:28 compute-0 NetworkManager[56133]: <info>  [1769167828.9314] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 23 11:30:28 compute-0 NetworkManager[56133]: <info>  [1769167828.9320] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 23 11:30:28 compute-0 NetworkManager[56133]: <info>  [1769167828.9323] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 11:30:28 compute-0 kernel: br-int: entered promiscuous mode
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 23 11:30:28 compute-0 ovn_controller[97581]: 2026-01-23T11:30:28Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 23 11:30:28 compute-0 systemd-udevd[97712]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:30:29 compute-0 ovn_controller[97581]: 2026-01-23T11:30:29Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 11:30:29 compute-0 ovn_controller[97581]: 2026-01-23T11:30:29Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 11:30:29 compute-0 ovn_controller[97581]: 2026-01-23T11:30:29Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 11:30:29 compute-0 ovn_controller[97581]: 2026-01-23T11:30:29Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 11:30:29 compute-0 ovn_controller[97581]: 2026-01-23T11:30:29Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 11:30:29 compute-0 ovn_controller[97581]: 2026-01-23T11:30:29Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 11:30:29 compute-0 NetworkManager[56133]: <info>  [1769167829.2501] manager: (ovn-d0377f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 23 11:30:29 compute-0 systemd-udevd[97717]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:30:29 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 23 11:30:29 compute-0 NetworkManager[56133]: <info>  [1769167829.2838] device (genev_sys_6081): carrier: link connected
Jan 23 11:30:29 compute-0 NetworkManager[56133]: <info>  [1769167829.2844] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
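By this point edpm_ovn_controller.service is up and ovn-controller has connected to the local ovsdb socket, the southbound DB at ovsdbserver-sb.openstack.svc:6642, and br-int, with geneve encapsulation (genev_sys_6081) in place. The one-shot unit failure at 11:30:28 (status=1/FAILURE) is the first podman healthcheck run firing while the container was still health_status=starting; podman simply re-probes on its timer. Quick state checks, assuming standard podman/systemctl usage:

    systemctl status edpm_ovn_controller.service
    podman ps --filter name=ovn_controller   # health state appears in the STATUS column
    podman logs --tail 20 ovn_controller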
Jan 23 11:30:29 compute-0 python3.9[97842]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:30:30 compute-0 sudo[97992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwvxbuctgujjinlkjqhvowoxplrijvvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167830.035549-619-107649552134498/AnsiballZ_stat.py'
Jan 23 11:30:30 compute-0 sudo[97992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:30 compute-0 python3.9[97994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:30 compute-0 sudo[97992]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:30 compute-0 sudo[98115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clnbwhfczfgnonajicswnqzvkxedhdcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167830.035549-619-107649552134498/AnsiballZ_copy.py'
Jan 23 11:30:30 compute-0 sudo[98115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:30 compute-0 python3.9[98117]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167830.035549-619-107649552134498/.source.yaml _original_basename=.8f1sot_5 follow=False checksum=661e5827f5dda52fe418c66f5a8341f3047dd9f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:30:30 compute-0 sudo[98115]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:31 compute-0 sudo[98267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twbbktelzbmsdvjuokvbpjgveetyxcci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167831.1584733-634-271251960269921/AnsiballZ_command.py'
Jan 23 11:30:31 compute-0 sudo[98267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:31 compute-0 python3.9[98269]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:30:31 compute-0 ovs-vsctl[98270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 23 11:30:31 compute-0 sudo[98267]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:32 compute-0 sudo[98420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmyivkvqeoifypywtihyusdoxupveptm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167831.8935883-642-202501873498559/AnsiballZ_command.py'
Jan 23 11:30:32 compute-0 sudo[98420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:32 compute-0 python3.9[98422]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:30:32 compute-0 ovs-vsctl[98424]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 23 11:30:32 compute-0 sudo[98420]: pam_unix(sudo:session): session closed for user root
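The ovs-vsctl ERR here is expected rather than fatal: get on a missing key exits non-zero by default, and the play evidently tolerates that before removing the key in the next task. Per standard ovs-vsctl behavior (not shown in this log), --if-exists turns the missing key into empty output instead of an error:

    # quiet probe: prints nothing and exits 0 when the key is absent
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options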
Jan 23 11:30:32 compute-0 sudo[98575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobmnlpszzbkosfzfvreicvjgnylqbfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167832.660886-656-203058162387798/AnsiballZ_command.py'
Jan 23 11:30:32 compute-0 sudo[98575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:33 compute-0 python3.9[98577]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:30:33 compute-0 ovs-vsctl[98578]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 23 11:30:33 compute-0 sudo[98575]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:33 compute-0 sshd-session[87106]: Connection closed by 192.168.122.30 port 40868
Jan 23 11:30:33 compute-0 sshd-session[87103]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:30:33 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 11:30:33 compute-0 systemd[1]: session-20.scope: Consumed 47.047s CPU time.
Jan 23 11:30:33 compute-0 systemd-logind[798]: Session 20 logged out. Waiting for processes to exit.
Jan 23 11:30:33 compute-0 systemd-logind[798]: Removed session 20.
Jan 23 11:30:39 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 23 11:30:39 compute-0 systemd[97616]: Activating special unit Exit the Session...
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped target Main User Target.
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped target Basic System.
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped target Paths.
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped target Sockets.
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped target Timers.
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 11:30:39 compute-0 systemd[97616]: Closed D-Bus User Message Bus Socket.
Jan 23 11:30:39 compute-0 systemd[97616]: Stopped Create User's Volatile Files and Directories.
Jan 23 11:30:39 compute-0 systemd[97616]: Removed slice User Application Slice.
Jan 23 11:30:39 compute-0 systemd[97616]: Reached target Shutdown.
Jan 23 11:30:39 compute-0 systemd[97616]: Finished Exit the Session.
Jan 23 11:30:39 compute-0 systemd[97616]: Reached target Exit the Session.
Jan 23 11:30:39 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 23 11:30:39 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 23 11:30:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 23 11:30:39 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 23 11:30:39 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 23 11:30:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 23 11:30:39 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 23 11:30:44 compute-0 sshd-session[98609]: Accepted publickey for zuul from 192.168.122.30 port 39750 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:30:44 compute-0 systemd-logind[798]: New session 22 of user zuul.
Jan 23 11:30:44 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 23 11:30:44 compute-0 sshd-session[98609]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:30:46 compute-0 python3.9[98762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:30:46 compute-0 sudo[98916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxtjukcdhlzcscyctoxqvdbwszwanhgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167846.535394-29-2323609647233/AnsiballZ_file.py'
Jan 23 11:30:46 compute-0 sudo[98916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:47 compute-0 python3.9[98918]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:47 compute-0 sudo[98916]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:47 compute-0 sudo[99069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzaxwvritfvgasnbotckgyesdryszujl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167847.2696104-29-102644975379550/AnsiballZ_file.py'
Jan 23 11:30:47 compute-0 sudo[99069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:47 compute-0 python3.9[99071]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:47 compute-0 sudo[99069]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:48 compute-0 sudo[99221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmgvaqxksgynozocotilbymjfvlxefjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167847.858306-29-17535683620656/AnsiballZ_file.py'
Jan 23 11:30:48 compute-0 sudo[99221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:48 compute-0 python3.9[99223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:48 compute-0 sudo[99221]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:48 compute-0 sudo[99373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsybynyqzjtsreybmbyrfhvgucqpfzbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167848.62355-29-275203821216535/AnsiballZ_file.py'
Jan 23 11:30:48 compute-0 sudo[99373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:49 compute-0 python3.9[99375]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:49 compute-0 sudo[99373]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:49 compute-0 sudo[99525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urpjlcfdehyvchlvevqlaypxsrifbtcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167849.381496-29-250781640283121/AnsiballZ_file.py'
Jan 23 11:30:49 compute-0 sudo[99525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:49 compute-0 python3.9[99527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:49 compute-0 sudo[99525]: pam_unix(sudo:session): session closed for user root
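The five ansible.builtin.file tasks above build the metadata-agent directory tree one directory per task. Condensed into shell (owner, group, and setype are from the log; 0755 is logged for all but the first directory, which sets no explicit mode):

    for d in /var/lib/openstack/neutron-ovn-metadata-agent \
             /var/lib/neutron \
             /var/lib/neutron/kill_scripts \
             /var/lib/neutron/ovn-metadata-proxy \
             /var/lib/neutron/external/pids; do
      install -d -o zuul -g zuul -m 0755 "$d"   # install -d creates parent components as needed
      chcon -t container_file_t "$d"
    done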
Jan 23 11:30:50 compute-0 python3.9[99677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:30:51 compute-0 sudo[99827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxrahobjzvrhrkfdzrfzuovyyvpykfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167850.7729537-73-258075929681937/AnsiballZ_seboolean.py'
Jan 23 11:30:51 compute-0 sudo[99827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:51 compute-0 python3.9[99829]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 11:30:52 compute-0 sudo[99827]: pam_unix(sudo:session): session closed for user root
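ansible.posix.seboolean with persistent=True maps onto setsebool -P, which writes the boolean into the policy store so it survives reboots:

    setsebool -P virt_sandbox_use_netlink 1
    getsebool virt_sandbox_use_netlink   # expect: virt_sandbox_use_netlink --> on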
Jan 23 11:30:52 compute-0 python3.9[99979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:53 compute-0 python3.9[100100]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167852.1660323-81-229630510528476/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:53 compute-0 python3.9[100250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:54 compute-0 python3.9[100371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167853.4644413-96-149686388559305/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:30:54 compute-0 sudo[100521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idmovvicbmtqhyrppvsaaxddpuxovmbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167854.6495347-113-137831177694008/AnsiballZ_setup.py'
Jan 23 11:30:54 compute-0 sudo[100521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:55 compute-0 python3.9[100523]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:30:55 compute-0 sudo[100521]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:55 compute-0 sudo[100605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsrwelscltnafnpjllmforkuwdhhrtdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167854.6495347-113-137831177694008/AnsiballZ_dnf.py'
Jan 23 11:30:55 compute-0 sudo[100605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:56 compute-0 python3.9[100607]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:30:57 compute-0 sudo[100605]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:58 compute-0 sudo[100758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sogjphvnrhtqheivgiyekzbnbktjxmje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167857.4432395-125-173059875193921/AnsiballZ_systemd.py'
Jan 23 11:30:58 compute-0 sudo[100758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:30:58 compute-0 python3.9[100760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:30:58 compute-0 sudo[100758]: pam_unix(sudo:session): session closed for user root
Jan 23 11:30:58 compute-0 ovn_controller[97581]: 2026-01-23T11:30:58Z|00025|memory|INFO|16256 kB peak resident set size after 30.0 seconds
Jan 23 11:30:58 compute-0 ovn_controller[97581]: 2026-01-23T11:30:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 23 11:30:59 compute-0 podman[100887]: 2026-01-23 11:30:59.004177732 +0000 UTC m=+0.136245924 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 11:30:59 compute-0 python3.9[100925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:30:59 compute-0 python3.9[101060]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167858.6521106-133-44515369219347/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:00 compute-0 python3.9[101210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:00 compute-0 python3.9[101331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167859.7386222-133-174155474004191/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:01 compute-0 python3.9[101481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:02 compute-0 python3.9[101602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167861.3603203-177-133770969490856/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:02 compute-0 python3.9[101752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:03 compute-0 python3.9[101873]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167862.3723335-177-34924920737686/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:03 compute-0 python3.9[102023]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:31:04 compute-0 sudo[102175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flogbviyunmbxvrnljjewmnpkllrzdau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167864.2412403-215-140936083643686/AnsiballZ_file.py'
Jan 23 11:31:04 compute-0 sudo[102175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:04 compute-0 python3.9[102177]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:04 compute-0 sudo[102175]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:05 compute-0 sudo[102327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczznyjpfzlycuyimxmdhxaldszpguqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167864.8963137-223-53465973644416/AnsiballZ_stat.py'
Jan 23 11:31:05 compute-0 sudo[102327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:05 compute-0 python3.9[102329]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:05 compute-0 sudo[102327]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:05 compute-0 sudo[102405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwmgpungqbuekgakbtgxxaitupehhzhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167864.8963137-223-53465973644416/AnsiballZ_file.py'
Jan 23 11:31:05 compute-0 sudo[102405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:05 compute-0 python3.9[102407]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:05 compute-0 sudo[102405]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:06 compute-0 sudo[102557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwbgkukaoxakxrfakruznlnxrpkbpisw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167865.9292932-223-192628141146257/AnsiballZ_stat.py'
Jan 23 11:31:06 compute-0 sudo[102557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:06 compute-0 python3.9[102559]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:06 compute-0 sudo[102557]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:06 compute-0 sudo[102635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiravnjluasmgqbwnaigtrmiatlheahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167865.9292932-223-192628141146257/AnsiballZ_file.py'
Jan 23 11:31:06 compute-0 sudo[102635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:06 compute-0 python3.9[102637]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:06 compute-0 sudo[102635]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:07 compute-0 sudo[102787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkqyjihuorszxdbuuwnbvodilrtuyfpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167866.9792182-246-180846019227347/AnsiballZ_file.py'
Jan 23 11:31:07 compute-0 sudo[102787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:07 compute-0 python3.9[102789]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:07 compute-0 sudo[102787]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:07 compute-0 sudo[102939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-furaucwbwtgcnkfagkyeqnedohcfqtdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167867.6709647-254-43301331970994/AnsiballZ_stat.py'
Jan 23 11:31:07 compute-0 sudo[102939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:08 compute-0 python3.9[102941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:08 compute-0 sudo[102939]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:08 compute-0 sudo[103017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzlnbmvteahkmabaghvjyprqiofxdken ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167867.6709647-254-43301331970994/AnsiballZ_file.py'
Jan 23 11:31:08 compute-0 sudo[103017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:08 compute-0 python3.9[103019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:08 compute-0 sudo[103017]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:08 compute-0 sudo[103169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkpzqkblhojnuvpowiduvfjtzqnhtcit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167868.640683-266-931214790446/AnsiballZ_stat.py'
Jan 23 11:31:08 compute-0 sudo[103169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:09 compute-0 python3.9[103171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:09 compute-0 sudo[103169]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:09 compute-0 sudo[103247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyzpcbhumgdwgbkfsorzcdnqerolyoli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167868.640683-266-931214790446/AnsiballZ_file.py'
Jan 23 11:31:09 compute-0 sudo[103247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:09 compute-0 python3.9[103249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:09 compute-0 sudo[103247]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:10 compute-0 sudo[103399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfmetvbpfzqgeaaillqajolgqevlacuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167869.7077236-278-153751833977015/AnsiballZ_systemd.py'
Jan 23 11:31:10 compute-0 sudo[103399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:10 compute-0 python3.9[103401]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:10 compute-0 systemd[1]: Reloading.
Jan 23 11:31:10 compute-0 systemd-rc-local-generator[103425]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:31:10 compute-0 systemd-sysv-generator[103428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:31:10 compute-0 sudo[103399]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:10 compute-0 sudo[103589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbruftquifjqflumtwlatqucyulliwyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167870.7009604-286-143601549778787/AnsiballZ_stat.py'
Jan 23 11:31:10 compute-0 sudo[103589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:11 compute-0 python3.9[103591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:11 compute-0 sudo[103589]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:11 compute-0 sudo[103667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numkciacjfgcvgzjjqwwhstilwsrtwtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167870.7009604-286-143601549778787/AnsiballZ_file.py'
Jan 23 11:31:11 compute-0 sudo[103667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:11 compute-0 python3.9[103669]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:11 compute-0 sudo[103667]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:11 compute-0 sudo[103819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpswnkivpkmeolaljbqirzbjhueqpmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167871.6696663-298-184806986022658/AnsiballZ_stat.py'
Jan 23 11:31:11 compute-0 sudo[103819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:12 compute-0 python3.9[103821]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:12 compute-0 sudo[103819]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:12 compute-0 sudo[103897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqszlnyptkpwrnxeotxzkzndojmjvgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167871.6696663-298-184806986022658/AnsiballZ_file.py'
Jan 23 11:31:12 compute-0 sudo[103897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:12 compute-0 python3.9[103899]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:12 compute-0 sudo[103897]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:12 compute-0 sudo[104049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcsugysgtonuqjputpzernwspkngvcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167872.611278-310-117982119656364/AnsiballZ_systemd.py'
Jan 23 11:31:12 compute-0 sudo[104049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:13 compute-0 python3.9[104051]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:13 compute-0 systemd[1]: Reloading.
Jan 23 11:31:13 compute-0 systemd-sysv-generator[104082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:31:13 compute-0 systemd-rc-local-generator[104078]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:31:13 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 11:31:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 11:31:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 11:31:13 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 11:31:13 compute-0 sudo[104049]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:14 compute-0 sudo[104243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkekxhdugchuypxpznrkmxjtsebwgxzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167873.756246-320-26257941704809/AnsiballZ_file.py'
Jan 23 11:31:14 compute-0 sudo[104243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:14 compute-0 python3.9[104245]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:14 compute-0 sudo[104243]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:14 compute-0 sudo[104395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxkvrfelgbbeolwytwsczmzesrwfgqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167874.3919227-328-26655491201294/AnsiballZ_stat.py'
Jan 23 11:31:14 compute-0 sudo[104395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:14 compute-0 python3.9[104397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:14 compute-0 sudo[104395]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:15 compute-0 sudo[104518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omokkfgbeyrrwdoxlachdfrwkwnwsdpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167874.3919227-328-26655491201294/AnsiballZ_copy.py'
Jan 23 11:31:15 compute-0 sudo[104518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:15 compute-0 python3.9[104520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769167874.3919227-328-26655491201294/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:15 compute-0 sudo[104518]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:15 compute-0 sudo[104670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzjmkdmwrqwcewflbufiycgtpctjarb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167875.72626-345-273654897726558/AnsiballZ_file.py'
Jan 23 11:31:15 compute-0 sudo[104670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:16 compute-0 python3.9[104672]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:16 compute-0 sudo[104670]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:16 compute-0 sudo[104822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llmynlwzaawgguwxzofkcanpwhwfprwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167876.3433008-353-153515350607295/AnsiballZ_file.py'
Jan 23 11:31:16 compute-0 sudo[104822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:16 compute-0 python3.9[104824]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:31:16 compute-0 sudo[104822]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:17 compute-0 sudo[104974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zadldijbvtcmmtmpfnredildqccfgoqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167877.010661-361-245822448029216/AnsiballZ_stat.py'
Jan 23 11:31:17 compute-0 sudo[104974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:17 compute-0 python3.9[104976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:17 compute-0 sudo[104974]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:17 compute-0 sudo[105097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djqdzooxvrlqzvsydsarrjxluhnwimmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167877.010661-361-245822448029216/AnsiballZ_copy.py'
Jan 23 11:31:17 compute-0 sudo[105097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:17 compute-0 python3.9[105099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167877.010661-361-245822448029216/.source.json _original_basename=.4f__jpem follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:17 compute-0 sudo[105097]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:18 compute-0 python3.9[105249]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:20 compute-0 sudo[105670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdgcirwhsppnrnblxhghtukkuxscjakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167880.0638154-401-180565900506160/AnsiballZ_container_config_data.py'
Jan 23 11:31:20 compute-0 sudo[105670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:20 compute-0 python3.9[105672]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 23 11:31:20 compute-0 sudo[105670]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:21 compute-0 sudo[105822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufaqfrbufdwhjpvvtzdbrmnovdeablrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167880.9497888-412-231809778686238/AnsiballZ_container_config_hash.py'
Jan 23 11:31:21 compute-0 sudo[105822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:21 compute-0 python3.9[105824]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:31:21 compute-0 sudo[105822]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:22 compute-0 sudo[105974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyvcolfgelspggwwyklrcwkmemipqurh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769167881.8568077-422-164007032342076/AnsiballZ_edpm_container_manage.py'
Jan 23 11:31:22 compute-0 sudo[105974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:22 compute-0 python3[105976]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:31:22 compute-0 podman[106013]: 2026-01-23 11:31:22.842486692 +0000 UTC m=+0.059015752 container create d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 11:31:22 compute-0 podman[106013]: 2026-01-23 11:31:22.812436572 +0000 UTC m=+0.028965662 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 11:31:22 compute-0 python3[105976]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 11:31:22 compute-0 sudo[105974]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:23 compute-0 sudo[106201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plixuspgjbeoehhzrsyshpsepwghzpfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167883.1640313-430-102001912758629/AnsiballZ_stat.py'
Jan 23 11:31:23 compute-0 sudo[106201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:23 compute-0 python3.9[106203]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:31:23 compute-0 sudo[106201]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:24 compute-0 sudo[106355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkjdbthfrvtqgrsdftlravvcjytpesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167883.9187093-439-24808354558562/AnsiballZ_file.py'
Jan 23 11:31:24 compute-0 sudo[106355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:24 compute-0 python3.9[106357]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:24 compute-0 sudo[106355]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:24 compute-0 sudo[106431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eclnwpidkhlafkoxorxeqhsgfglqhmty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167883.9187093-439-24808354558562/AnsiballZ_stat.py'
Jan 23 11:31:24 compute-0 sudo[106431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:24 compute-0 python3.9[106433]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:31:24 compute-0 sudo[106431]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:25 compute-0 sudo[106582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghpcmfnvdgrunjvbkovmuzdihkqmvga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167884.8931096-439-73606473740030/AnsiballZ_copy.py'
Jan 23 11:31:25 compute-0 sudo[106582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:25 compute-0 python3.9[106584]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769167884.8931096-439-73606473740030/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:25 compute-0 sudo[106582]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:25 compute-0 sudo[106658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycsgwexdaxhzxwfstfqxttiyordjxmar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167884.8931096-439-73606473740030/AnsiballZ_systemd.py'
Jan 23 11:31:25 compute-0 sudo[106658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:25 compute-0 python3.9[106660]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:31:26 compute-0 systemd[1]: Reloading.
Jan 23 11:31:26 compute-0 systemd-rc-local-generator[106685]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:31:26 compute-0 systemd-sysv-generator[106690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:31:26 compute-0 sudo[106658]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:26 compute-0 sudo[106768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmdwwakqlmxsyklmgvzlawazfphehazl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167884.8931096-439-73606473740030/AnsiballZ_systemd.py'
Jan 23 11:31:26 compute-0 sudo[106768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:26 compute-0 python3.9[106770]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:26 compute-0 systemd[1]: Reloading.
Jan 23 11:31:26 compute-0 systemd-rc-local-generator[106800]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:31:26 compute-0 systemd-sysv-generator[106803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:31:27 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 23 11:31:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe768f17f1c5cabc4749172dde98037eefa48ab7b78fed50c0db6e168c805d5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 23 11:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe768f17f1c5cabc4749172dde98037eefa48ab7b78fed50c0db6e168c805d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 11:31:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.
Jan 23 11:31:27 compute-0 podman[106811]: 2026-01-23 11:31:27.242778182 +0000 UTC m=+0.126203769 container init d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + sudo -E kolla_set_configs
Jan 23 11:31:27 compute-0 podman[106811]: 2026-01-23 11:31:27.27161883 +0000 UTC m=+0.155044387 container start d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 23 11:31:27 compute-0 edpm-start-podman-container[106811]: ovn_metadata_agent
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Validating config file
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Copying service configuration files
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Writing out command to execute
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 23 11:31:27 compute-0 edpm-start-podman-container[106810]: Creating additional drop-in dependency for "ovn_metadata_agent" (d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8)
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: ++ cat /run_command
Jan 23 11:31:27 compute-0 podman[106834]: 2026-01-23 11:31:27.32908485 +0000 UTC m=+0.047944301 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + CMD=neutron-ovn-metadata-agent
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + ARGS=
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + sudo kolla_copy_cacerts
Jan 23 11:31:27 compute-0 systemd[1]: Reloading.
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + [[ ! -n '' ]]
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + . kolla_extend_start
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: Running command: 'neutron-ovn-metadata-agent'
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + umask 0022
Jan 23 11:31:27 compute-0 ovn_metadata_agent[106827]: + exec neutron-ovn-metadata-agent
Jan 23 11:31:27 compute-0 systemd-rc-local-generator[106901]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:31:27 compute-0 systemd-sysv-generator[106904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:31:27 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 23 11:31:27 compute-0 sudo[106768]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:28 compute-0 python3.9[107063]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:31:28 compute-0 sudo[107213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpifsevddxcnvlyffsplrylidqcgfnhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167888.7164714-484-116535301488059/AnsiballZ_stat.py'
Jan 23 11:31:28 compute-0 sudo[107213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.033 106832 INFO neutron.common.config [-] Logging enabled!
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.033 106832 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.033 106832 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.034 106832 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.035 106832 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.036 106832 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.037 106832 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.038 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.039 106832 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.040 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.041 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.042 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.043 106832 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.044 106832 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.045 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.046 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.047 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.048 106832 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.049 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.050 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.051 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.052 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.053 106832 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.054 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.055 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.056 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.057 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.058 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.059 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.060 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.061 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.062 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.063 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.064 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.065 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.065 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.065 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.065 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.065 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.065 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.066 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.067 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.068 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.068 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.068 106832 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.068 106832 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.076 106832 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.076 106832 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.076 106832 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.077 106832 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.077 106832 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.089 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9a136bfd-345f-428f-a7f6-d55531120214 (UUID: 9a136bfd-345f-428f-a7f6-d55531120214) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.111 106832 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.111 106832 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.111 106832 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.112 106832 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.115 106832 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.120 106832 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.125 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9a136bfd-345f-428f-a7f6-d55531120214'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], external_ids={}, name=9a136bfd-345f-428f-a7f6-d55531120214, nb_cfg_timestamp=1769167836956, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.125 106832 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fceaba80130>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.126 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.126 106832 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.126 106832 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.126 106832 INFO oslo_service.service [-] Starting 1 workers
Jan 23 11:31:29 compute-0 python3.9[107215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.130 106832 DEBUG oslo_service.service [-] Started child 107216 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.133 106832 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmplke3aez4/privsep.sock']
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.135 107216 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-424763'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.157 107216 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.158 107216 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.158 107216 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.161 107216 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.169 107216 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.177 107216 INFO eventlet.wsgi.server [-] (107216) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 23 11:31:29 compute-0 sudo[107213]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:29 compute-0 podman[107217]: 2026-01-23 11:31:29.238322707 +0000 UTC m=+0.082061758 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:31:29 compute-0 sudo[107369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pipbmmwpmyustsxanweanaetastlzidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167888.7164714-484-116535301488059/AnsiballZ_copy.py'
Jan 23 11:31:29 compute-0 sudo[107369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:29 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 23 11:31:29 compute-0 python3.9[107371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769167888.7164714-484-116535301488059/.source.yaml _original_basename=._gmot1h3 follow=False checksum=862b89703409a214795525019fea3636c0df0ef5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:29 compute-0 sudo[107369]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.752 106832 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.753 106832 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplke3aez4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.638 107372 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.642 107372 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.643 107372 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.644 107372 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107372
Jan 23 11:31:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:29.755 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe819a8-2265-4f68-978e-c698d3eb7c0e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:31:30 compute-0 sshd-session[98612]: Connection closed by 192.168.122.30 port 39750
Jan 23 11:31:30 compute-0 sshd-session[98609]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:31:30 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 11:31:30 compute-0 systemd[1]: session-22.scope: Consumed 33.559s CPU time.
Jan 23 11:31:30 compute-0 systemd-logind[798]: Session 22 logged out. Waiting for processes to exit.
Jan 23 11:31:30 compute-0 systemd-logind[798]: Removed session 22.
Jan 23 11:31:30 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:30.214 107372 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:31:30 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:30.214 107372 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:31:30 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:30.215 107372 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:31:30 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:30.722 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[990e75ce-3181-43d2-b281-1bac2e8e6e8b]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:31:30 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:30.724 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, column=external_ids, values=({'neutron:ovn-metadata-id': 'b0edf57e-d755-5128-84c1-a94015b35b20'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:31:30 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:30.955 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.243 106832 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.243 106832 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.243 106832 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.244 106832 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.244 106832 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.244 106832 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.244 106832 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.245 106832 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.245 106832 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.245 106832 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.245 106832 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.246 106832 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.246 106832 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.246 106832 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.246 106832 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.247 106832 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.247 106832 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.247 106832 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.247 106832 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.248 106832 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.248 106832 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.248 106832 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.248 106832 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.249 106832 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.249 106832 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.249 106832 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.250 106832 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.250 106832 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.250 106832 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.251 106832 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.251 106832 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.251 106832 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.251 106832 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.252 106832 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.252 106832 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.252 106832 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.253 106832 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.253 106832 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.254 106832 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.254 106832 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.254 106832 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.254 106832 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.255 106832 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.255 106832 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.255 106832 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.255 106832 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.256 106832 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.256 106832 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.256 106832 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.257 106832 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.257 106832 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.257 106832 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.257 106832 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.258 106832 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.258 106832 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.258 106832 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.258 106832 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.259 106832 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.259 106832 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.259 106832 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.259 106832 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.260 106832 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.260 106832 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.260 106832 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.260 106832 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.261 106832 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.261 106832 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.261 106832 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.261 106832 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.262 106832 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.262 106832 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.262 106832 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.262 106832 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.263 106832 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.263 106832 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.263 106832 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.264 106832 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.264 106832 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.264 106832 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.264 106832 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.265 106832 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.265 106832 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.265 106832 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.265 106832 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.266 106832 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.266 106832 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.266 106832 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.266 106832 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.267 106832 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.267 106832 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.267 106832 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.267 106832 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.268 106832 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.268 106832 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.268 106832 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.269 106832 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.269 106832 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.269 106832 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.269 106832 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.270 106832 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.270 106832 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.270 106832 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.270 106832 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.271 106832 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.271 106832 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.271 106832 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.271 106832 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.271 106832 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.272 106832 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.272 106832 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.272 106832 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.273 106832 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.273 106832 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.273 106832 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.273 106832 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.274 106832 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.274 106832 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.274 106832 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.274 106832 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.274 106832 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.275 106832 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.275 106832 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.275 106832 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.275 106832 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.276 106832 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.276 106832 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.276 106832 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.276 106832 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.277 106832 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.277 106832 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.277 106832 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.277 106832 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.278 106832 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.278 106832 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.278 106832 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.278 106832 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.279 106832 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.279 106832 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.279 106832 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.279 106832 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.280 106832 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.280 106832 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.280 106832 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.280 106832 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.281 106832 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.281 106832 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.281 106832 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.281 106832 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.282 106832 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.282 106832 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.282 106832 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.282 106832 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.283 106832 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.283 106832 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.283 106832 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.283 106832 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.283 106832 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.284 106832 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.284 106832 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.284 106832 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.284 106832 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.284 106832 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.285 106832 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.285 106832 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.285 106832 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.285 106832 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.286 106832 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.286 106832 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.286 106832 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.286 106832 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.286 106832 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.287 106832 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.287 106832 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.287 106832 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.287 106832 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.288 106832 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.288 106832 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.288 106832 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.289 106832 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.289 106832 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.289 106832 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.289 106832 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.290 106832 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.290 106832 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.290 106832 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.290 106832 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.291 106832 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.291 106832 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.291 106832 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.292 106832 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.292 106832 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.292 106832 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.293 106832 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.293 106832 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.293 106832 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.293 106832 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.294 106832 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.294 106832 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.294 106832 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.294 106832 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.294 106832 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.294 106832 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.295 106832 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.296 106832 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.297 106832 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.298 106832 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.299 106832 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.299 106832 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.299 106832 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.299 106832 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.299 106832 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.300 106832 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.300 106832 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.300 106832 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.300 106832 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.300 106832 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.301 106832 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.301 106832 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.301 106832 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.301 106832 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.301 106832 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.302 106832 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.302 106832 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.302 106832 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.302 106832 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.302 106832 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.303 106832 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.303 106832 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.303 106832 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.303 106832 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.303 106832 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.303 106832 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.304 106832 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.304 106832 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.304 106832 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.304 106832 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.304 106832 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.305 106832 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.305 106832 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.305 106832 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.305 106832 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.305 106832 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.306 106832 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.306 106832 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.306 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.306 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.306 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.307 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.307 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.307 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.307 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.307 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.308 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.308 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.308 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.308 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.308 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.308 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.309 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.309 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.309 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.309 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.309 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.309 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.310 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.310 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.310 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.310 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.310 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.310 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.311 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.311 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.311 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.311 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.311 106832 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.311 106832 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.312 106832 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.312 106832 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.312 106832 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:31:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:31:31.312 106832 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 11:31:38 compute-0 sshd-session[107401]: Accepted publickey for zuul from 192.168.122.30 port 52874 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:31:38 compute-0 systemd-logind[798]: New session 23 of user zuul.
Jan 23 11:31:38 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 23 11:31:38 compute-0 sshd-session[107401]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:31:39 compute-0 python3.9[107554]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:31:40 compute-0 sudo[107708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazkayvifygurakjbvymguqmvdbqbgfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167900.151357-29-8898473204725/AnsiballZ_command.py'
Jan 23 11:31:40 compute-0 sudo[107708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:40 compute-0 python3.9[107710]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:31:40 compute-0 sudo[107708]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:41 compute-0 sudo[107873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slbatxklqmawtkezjmvujwofpdkpwziy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167901.2724106-40-160985300487857/AnsiballZ_systemd_service.py'
Jan 23 11:31:41 compute-0 sudo[107873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:42 compute-0 python3.9[107875]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:31:42 compute-0 systemd[1]: Reloading.
Jan 23 11:31:42 compute-0 systemd-rc-local-generator[107902]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:31:42 compute-0 systemd-sysv-generator[107907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:31:42 compute-0 sudo[107873]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:43 compute-0 python3.9[108060]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:31:43 compute-0 network[108077]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:31:43 compute-0 network[108078]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:31:43 compute-0 network[108079]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:31:47 compute-0 sudo[108338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevsydtfvoedhfvxyfvxqegwidinsspv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167906.8352196-59-56334659196960/AnsiballZ_systemd_service.py'
Jan 23 11:31:47 compute-0 sudo[108338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:47 compute-0 python3.9[108340]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:47 compute-0 sudo[108338]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:47 compute-0 sudo[108491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaktbtaofugmoaegakcntcvtzjfdbcyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167907.5370498-59-124862974567838/AnsiballZ_systemd_service.py'
Jan 23 11:31:47 compute-0 sudo[108491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:48 compute-0 python3.9[108493]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:48 compute-0 sudo[108491]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:48 compute-0 sudo[108644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzidakzvoczsvrcyssfeqlvwlberduvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167908.2292063-59-153124352005757/AnsiballZ_systemd_service.py'
Jan 23 11:31:48 compute-0 sudo[108644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:48 compute-0 python3.9[108646]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:49 compute-0 sudo[108644]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:50 compute-0 sudo[108797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypronqdqddnknwwgyzgyfabkhwxoogpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167909.9189446-59-34582529798647/AnsiballZ_systemd_service.py'
Jan 23 11:31:50 compute-0 sudo[108797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:50 compute-0 python3.9[108799]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:50 compute-0 sudo[108797]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:50 compute-0 sudo[108950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpvekmbwycweqkdqwlnpnbffdtggdqca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167910.6333494-59-233368188702270/AnsiballZ_systemd_service.py'
Jan 23 11:31:50 compute-0 sudo[108950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:51 compute-0 python3.9[108952]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:51 compute-0 sudo[108950]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:51 compute-0 sudo[109103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkvebecpcjuqyvohzlxwhnburgrxbtdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167911.4726517-59-270132967612458/AnsiballZ_systemd_service.py'
Jan 23 11:31:51 compute-0 sudo[109103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:52 compute-0 python3.9[109105]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:52 compute-0 sudo[109103]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:52 compute-0 sudo[109256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczhworgkgnohdykvgljjrwipjjqowxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167912.2013104-59-56809059138878/AnsiballZ_systemd_service.py'
Jan 23 11:31:52 compute-0 sudo[109256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:52 compute-0 python3.9[109258]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:31:52 compute-0 sudo[109256]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:53 compute-0 sudo[109409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-andnfjwznqdgqczpjhtdpbileociekmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167913.0477204-111-80000042423603/AnsiballZ_file.py'
Jan 23 11:31:53 compute-0 sudo[109409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:53 compute-0 python3.9[109411]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:53 compute-0 sudo[109409]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:54 compute-0 sudo[109561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydekjlhnpedlabwdtikynzlziumwbozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167913.9617126-111-211260187914406/AnsiballZ_file.py'
Jan 23 11:31:54 compute-0 sudo[109561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:54 compute-0 python3.9[109563]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:54 compute-0 sudo[109561]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:54 compute-0 sudo[109713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxxrcwoljydkjrtwpxwajtutnaiyfxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167914.5204315-111-97856986389265/AnsiballZ_file.py'
Jan 23 11:31:54 compute-0 sudo[109713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:55 compute-0 python3.9[109715]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:55 compute-0 sudo[109713]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:55 compute-0 sudo[109865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufawsyprtnjsxulddcqxxqbdcmnmdahg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167915.169183-111-181812282264791/AnsiballZ_file.py'
Jan 23 11:31:55 compute-0 sudo[109865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:55 compute-0 python3.9[109867]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:55 compute-0 sudo[109865]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:55 compute-0 sudo[110017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthbprirgqzftjpbcyieoznfhecuhnyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167915.7177646-111-79967460588748/AnsiballZ_file.py'
Jan 23 11:31:55 compute-0 sudo[110017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:56 compute-0 python3.9[110019]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:56 compute-0 sudo[110017]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:56 compute-0 sudo[110169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diymqfsaedhjrkhmfaqjbfefdzdkqgvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167916.2610414-111-218117755968062/AnsiballZ_file.py'
Jan 23 11:31:56 compute-0 sudo[110169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:56 compute-0 python3.9[110171]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:56 compute-0 sudo[110169]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:57 compute-0 sudo[110321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzpiwxdaqyuvjjgooozhiasqxpsxudc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167916.7859242-111-212186766604383/AnsiballZ_file.py'
Jan 23 11:31:57 compute-0 sudo[110321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:57 compute-0 python3.9[110323]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:57 compute-0 sudo[110321]: pam_unix(sudo:session): session closed for user root
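
The run of ansible.builtin.file invocations above deletes the legacy unit files from /usr/lib/systemd/system one task at a time; the same set is then removed from /etc/systemd/system below. As a loop this would look roughly like (a sketch; the loop form and task name are assumptions, the paths and state=absent come from the logged parameters):

  - name: Remove legacy tripleo_nova unit files  # assumed task name
    ansible.builtin.file:
      path: "/usr/lib/systemd/system/{{ item }}"  # repeated below for /etc/systemd/system
      state: absent
    loop:
      - tripleo_nova_libvirt.target
      - tripleo_nova_virtlogd_wrapper.service
      - tripleo_nova_virtnodedevd.service
      - tripleo_nova_virtproxyd.service
      - tripleo_nova_virtqemud.service
      - tripleo_nova_virtsecretd.service
      - tripleo_nova_virtstoraged.service
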
Jan 23 11:31:57 compute-0 sudo[110485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnxfxmaznpxtbchjxtesikiahibixluy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167917.437766-161-186009299656481/AnsiballZ_file.py'
Jan 23 11:31:57 compute-0 sudo[110485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:57 compute-0 podman[110447]: 2026-01-23 11:31:57.770556623 +0000 UTC m=+0.093121659 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:31:57 compute-0 python3.9[110491]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:57 compute-0 sudo[110485]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:58 compute-0 sudo[110644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzitdfmmhxbexyachporqkakzugittpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167918.0809755-161-186637268583281/AnsiballZ_file.py'
Jan 23 11:31:58 compute-0 sudo[110644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:58 compute-0 python3.9[110646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:58 compute-0 sudo[110644]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:58 compute-0 sudo[110796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evtkfnuamlrtyteeebgkwjiyblcuxqyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167918.6705096-161-245002535010495/AnsiballZ_file.py'
Jan 23 11:31:58 compute-0 sudo[110796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:59 compute-0 python3.9[110798]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:59 compute-0 sudo[110796]: pam_unix(sudo:session): session closed for user root
Jan 23 11:31:59 compute-0 sudo[110958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomiufonbdodgehwgociwuzbtxrbepch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167919.2398348-161-25342291183437/AnsiballZ_file.py'
Jan 23 11:31:59 compute-0 sudo[110958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:31:59 compute-0 podman[110922]: 2026-01-23 11:31:59.570329974 +0000 UTC m=+0.082492370 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 11:31:59 compute-0 python3.9[110964]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:31:59 compute-0 sudo[110958]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:00 compute-0 sudo[111125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qizbpvwhfhtuvuzlgumhzlpzqdynkwhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167919.823685-161-278124043894082/AnsiballZ_file.py'
Jan 23 11:32:00 compute-0 sudo[111125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:00 compute-0 python3.9[111127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:32:00 compute-0 sudo[111125]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:00 compute-0 sudo[111277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqyvzkhzdqbodmmcvwrwofhvdyytxbtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167920.43518-161-66419277315198/AnsiballZ_file.py'
Jan 23 11:32:00 compute-0 sudo[111277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:00 compute-0 python3.9[111279]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:32:00 compute-0 sudo[111277]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:01 compute-0 sudo[111429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfazeqehfwfersirujwafrvmdnlawvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167921.0422096-161-261723212073434/AnsiballZ_file.py'
Jan 23 11:32:01 compute-0 sudo[111429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:01 compute-0 python3.9[111431]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:32:01 compute-0 sudo[111429]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:02 compute-0 sudo[111581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhjuiylsigeokcofomljjxqiivebypvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167921.764821-212-212973293444575/AnsiballZ_command.py'
Jan 23 11:32:02 compute-0 sudo[111581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:02 compute-0 python3.9[111583]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:02 compute-0 sudo[111581]: pam_unix(sudo:session): session closed for user root
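
The certmonger step above is logged as ansible.legacy.command with _uses_shell=True, i.e. a shell task: it disables certmonger only if it is currently active, and masks it unless a local unit override exists. A sketch of the task (the task name is an assumption; the script body is verbatim from the log):

  - name: Disable and mask certmonger when active  # assumed task name
    ansible.builtin.shell: |
      if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
      fi
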
Jan 23 11:32:03 compute-0 python3.9[111735]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:32:03 compute-0 sudo[111885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijxxtrypuoilmbtehbtwrstvcsuynnjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167923.267232-230-111393747247300/AnsiballZ_systemd_service.py'
Jan 23 11:32:03 compute-0 sudo[111885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:03 compute-0 python3.9[111887]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:32:03 compute-0 systemd[1]: Reloading.
Jan 23 11:32:03 compute-0 systemd-rc-local-generator[111912]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:32:03 compute-0 systemd-sysv-generator[111916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:32:04 compute-0 sudo[111885]: pam_unix(sudo:session): session closed for user root
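
The daemon_reload=True invocation with no unit name is a bare systemd reload, picking up the unit-file removals above (a sketch; the task name is an assumption):

  - name: Reload systemd after removing the legacy units  # assumed task name
    ansible.builtin.systemd_service:
      daemon_reload: true
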
Jan 23 11:32:04 compute-0 sudo[112072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhflndrnelhivpxlxljpvogsmufbphft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167924.1925104-238-64132579363233/AnsiballZ_command.py'
Jan 23 11:32:04 compute-0 sudo[112072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:04 compute-0 python3.9[112074]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:04 compute-0 sudo[112072]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:05 compute-0 sudo[112225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcsbigydluzabxrgpacyljjyabzknsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167924.7465932-238-32703926141011/AnsiballZ_command.py'
Jan 23 11:32:05 compute-0 sudo[112225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:05 compute-0 python3.9[112227]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:05 compute-0 sudo[112225]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:05 compute-0 sudo[112378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuepxfdemroesfbmajtpssrhengwfwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167925.3662636-238-253154095940162/AnsiballZ_command.py'
Jan 23 11:32:05 compute-0 sudo[112378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:05 compute-0 python3.9[112380]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:05 compute-0 sudo[112378]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:06 compute-0 sudo[112531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmczlmqvhxzxvtnkhtnfcgadwnafgojl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167925.9301896-238-189604417307708/AnsiballZ_command.py'
Jan 23 11:32:06 compute-0 sudo[112531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:06 compute-0 python3.9[112533]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:06 compute-0 sudo[112531]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:06 compute-0 sudo[112684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvfoxecwgfigrvqdohllljqlepdfljlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167926.4705067-238-237568998050761/AnsiballZ_command.py'
Jan 23 11:32:06 compute-0 sudo[112684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:06 compute-0 python3.9[112686]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:06 compute-0 sudo[112684]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:07 compute-0 sudo[112837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpwszghvrstntyyjkcmbcyrztfgkhww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167927.1353993-238-257724802051868/AnsiballZ_command.py'
Jan 23 11:32:07 compute-0 sudo[112837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:07 compute-0 python3.9[112839]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:07 compute-0 sudo[112837]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:08 compute-0 sudo[112990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdzgkraxkkiuggzslewctcztgltmlczq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167927.735286-238-270354125922692/AnsiballZ_command.py'
Jan 23 11:32:08 compute-0 sudo[112990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:08 compute-0 python3.9[112992]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:32:08 compute-0 sudo[112990]: pam_unix(sudo:session): session closed for user root
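
The seven systemctl reset-failed commands above clear any failed state left behind by the removed units so they no longer appear in systemctl --failed. As a looped task (a sketch; the loop form and task name are assumptions, the command and unit list are from the log):

  - name: Reset failed state of removed tripleo_nova units  # assumed task name
    ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
    loop:
      - tripleo_nova_libvirt.target
      - tripleo_nova_virtlogd_wrapper.service
      - tripleo_nova_virtnodedevd.service
      - tripleo_nova_virtproxyd.service
      - tripleo_nova_virtqemud.service
      - tripleo_nova_virtsecretd.service
      - tripleo_nova_virtstoraged.service
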
Jan 23 11:32:09 compute-0 sudo[113143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmkgmxoepfqicrwuykfropkxvzosjfji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167928.586546-292-32406750308038/AnsiballZ_getent.py'
Jan 23 11:32:09 compute-0 sudo[113143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:09 compute-0 python3.9[113145]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 23 11:32:09 compute-0 sudo[113143]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:09 compute-0 sudo[113296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzznkwaycxmpytqezyefpxnnzinzavda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167929.4010525-300-212459239358584/AnsiballZ_group.py'
Jan 23 11:32:09 compute-0 sudo[113296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:10 compute-0 python3.9[113298]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 11:32:10 compute-0 groupadd[113299]: group added to /etc/group: name=libvirt, GID=42473
Jan 23 11:32:10 compute-0 groupadd[113299]: group added to /etc/gshadow: name=libvirt
Jan 23 11:32:10 compute-0 groupadd[113299]: new group: name=libvirt, GID=42473
Jan 23 11:32:10 compute-0 sudo[113296]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:10 compute-0 sudo[113454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npowyftcszpyohvcwwavzmgdhjhgvflr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167930.326304-308-227464298173893/AnsiballZ_user.py'
Jan 23 11:32:10 compute-0 sudo[113454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:11 compute-0 python3.9[113456]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 11:32:11 compute-0 useradd[113458]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 11:32:11 compute-0 sudo[113454]: pam_unix(sudo:session): session closed for user root
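
After checking getent passwd for libvirt, the play creates the libvirt group and user with a fixed GID/UID of 42473 and a nologin shell, matching the groupadd/useradd records above (a sketch; task names are assumptions, all arguments are from the logged parameters):

  - name: Create libvirt group with fixed GID  # assumed task name
    ansible.builtin.group:
      name: libvirt
      gid: 42473
      state: present

  - name: Create libvirt user with fixed UID  # assumed task name
    ansible.builtin.user:
      name: libvirt
      uid: 42473
      group: libvirt
      comment: libvirt user
      shell: /sbin/nologin
      state: present
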
Jan 23 11:32:11 compute-0 sudo[113614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnspgvbybuljtfarellrdcbhoevlbph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167931.3455255-319-150808120798993/AnsiballZ_setup.py'
Jan 23 11:32:11 compute-0 sudo[113614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:11 compute-0 python3.9[113616]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:32:12 compute-0 sudo[113614]: pam_unix(sudo:session): session closed for user root
Jan 23 11:32:12 compute-0 sudo[113698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmfchqruuncespsjlxlroinzjacoceh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769167931.3455255-319-150808120798993/AnsiballZ_dnf.py'
Jan 23 11:32:12 compute-0 sudo[113698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:32:12 compute-0 python3.9[113700]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
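
The dnf invocation above installs the libvirt/QEMU stack plus supporting packages. Note that the first four names are logged with stray trailing spaces ('libvirt ', 'libvirt-admin ', ...), preserved from the playbook source; they are shown trimmed here. A sketch of the task (task name assumed, package list and state from the log):

  - name: Install libvirt and QEMU packages  # assumed task name
    ansible.builtin.dnf:
      state: present
      name:
        - libvirt
        - libvirt-admin
        - libvirt-client
        - libvirt-daemon
        - qemu-kvm
        - qemu-img
        - libguestfs
        - libseccomp
        - swtpm
        - swtpm-tools
        - edk2-ovmf
        - ceph-common
        - cyrus-sasl-scram
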
Jan 23 11:32:28 compute-0 podman[113890]: 2026-01-23 11:32:28.756562502 +0000 UTC m=+0.080730147 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 11:32:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:32:29.070 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:32:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:32:29.071 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:32:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:32:29.071 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:32:29 compute-0 podman[113910]: 2026-01-23 11:32:29.766129282 +0000 UTC m=+0.094974321 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 23 11:32:33 compute-0 sshd-session[113936]: Invalid user sol from 193.32.162.146 port 38546
Jan 23 11:32:33 compute-0 sshd-session[113936]: Connection closed by invalid user sol 193.32.162.146 port 38546 [preauth]
Jan 23 11:32:35 compute-0 kernel: SELinux:  Converting 2763 SID table entries...
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 11:32:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 11:32:44 compute-0 kernel: SELinux:  Converting 2763 SID table entries...
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 11:32:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 11:32:59 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 23 11:32:59 compute-0 podman[116198]: 2026-01-23 11:32:59.726949272 +0000 UTC m=+0.051201201 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 11:33:00 compute-0 podman[116875]: 2026-01-23 11:33:00.79457341 +0000 UTC m=+0.111892111 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 11:33:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:33:29.072 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:33:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:33:29.072 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:33:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:33:29.073 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:33:30 compute-0 podman[130868]: 2026-01-23 11:33:30.735059195 +0000 UTC m=+0.072347895 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 23 11:33:31 compute-0 podman[130887]: 2026-01-23 11:33:31.746227963 +0000 UTC m=+0.083231996 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 11:33:34 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 11:33:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 11:33:35 compute-0 groupadd[130925]: group added to /etc/group: name=dnsmasq, GID=993
Jan 23 11:33:35 compute-0 groupadd[130925]: group added to /etc/gshadow: name=dnsmasq
Jan 23 11:33:35 compute-0 groupadd[130925]: new group: name=dnsmasq, GID=993
Jan 23 11:33:35 compute-0 useradd[130932]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 23 11:33:35 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:33:35 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 23 11:33:35 compute-0 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Jan 23 11:33:36 compute-0 groupadd[130945]: group added to /etc/group: name=clevis, GID=992
Jan 23 11:33:36 compute-0 groupadd[130945]: group added to /etc/gshadow: name=clevis
Jan 23 11:33:36 compute-0 groupadd[130945]: new group: name=clevis, GID=992
Jan 23 11:33:36 compute-0 useradd[130952]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 23 11:33:36 compute-0 usermod[130962]: add 'clevis' to group 'tss'
Jan 23 11:33:36 compute-0 usermod[130962]: add 'clevis' to shadow group 'tss'
Jan 23 11:33:38 compute-0 polkitd[43551]: Reloading rules
Jan 23 11:33:38 compute-0 polkitd[43551]: Collecting garbage unconditionally...
Jan 23 11:33:38 compute-0 polkitd[43551]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 11:33:38 compute-0 polkitd[43551]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 11:33:38 compute-0 polkitd[43551]: Finished loading, compiling and executing 3 rules
Jan 23 11:33:38 compute-0 polkitd[43551]: Reloading rules
Jan 23 11:33:38 compute-0 polkitd[43551]: Collecting garbage unconditionally...
Jan 23 11:33:38 compute-0 polkitd[43551]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 11:33:38 compute-0 polkitd[43551]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 11:33:38 compute-0 polkitd[43551]: Finished loading, compiling and executing 3 rules
Jan 23 11:33:39 compute-0 groupadd[131152]: group added to /etc/group: name=ceph, GID=167
Jan 23 11:33:39 compute-0 groupadd[131152]: group added to /etc/gshadow: name=ceph
Jan 23 11:33:39 compute-0 groupadd[131152]: new group: name=ceph, GID=167
Jan 23 11:33:39 compute-0 useradd[131158]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 23 11:33:42 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 23 11:33:42 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 23 11:33:42 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 23 11:33:42 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 23 11:33:42 compute-0 systemd[1]: sshd.service: Consumed 1.625s CPU time, read 32.0K from disk, written 36.0K to disk.
Jan 23 11:33:42 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 23 11:33:42 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 23 11:33:42 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 11:33:42 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 11:33:42 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 11:33:42 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 23 11:33:42 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 23 11:33:42 compute-0 sshd[131677]: Server listening on 0.0.0.0 port 22.
Jan 23 11:33:42 compute-0 sshd[131677]: Server listening on :: port 22.
Jan 23 11:33:42 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 23 11:33:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:33:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:33:44 compute-0 systemd[1]: Reloading.
Jan 23 11:33:44 compute-0 systemd-rc-local-generator[131933]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:44 compute-0 systemd-sysv-generator[131936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:44 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:33:47 compute-0 sudo[113698]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:47 compute-0 sudo[136768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gksvutxrepruspbexttzhdvcqejaoczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168027.219666-331-141692026857445/AnsiballZ_systemd.py'
Jan 23 11:33:47 compute-0 sudo[136768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:48 compute-0 python3.9[136794]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:33:48 compute-0 systemd[1]: Reloading.
Jan 23 11:33:48 compute-0 systemd-rc-local-generator[137279]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:48 compute-0 systemd-sysv-generator[137284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:48 compute-0 sudo[136768]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:48 compute-0 sudo[138074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqozthwyytzhfmjorjiqmiphcyfcfpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168028.5573182-331-166267478671441/AnsiballZ_systemd.py'
Jan 23 11:33:48 compute-0 sudo[138074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:49 compute-0 python3.9[138100]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:33:49 compute-0 systemd[1]: Reloading.
Jan 23 11:33:49 compute-0 systemd-rc-local-generator[138566]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:49 compute-0 systemd-sysv-generator[138569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:49 compute-0 sudo[138074]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:49 compute-0 sudo[139581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naybqknkwbfuknjtqhkojgsohmvcbrbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168029.6780162-331-144270647929340/AnsiballZ_systemd.py'
Jan 23 11:33:49 compute-0 sudo[139581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:50 compute-0 python3.9[139605]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:33:50 compute-0 systemd[1]: Reloading.
Jan 23 11:33:50 compute-0 systemd-rc-local-generator[140035]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:50 compute-0 systemd-sysv-generator[140042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:50 compute-0 sudo[139581]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:51 compute-0 sudo[140803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpafkenxtdeutccfxtlaibnklppfufpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168030.8249454-331-274145372441626/AnsiballZ_systemd.py'
Jan 23 11:33:51 compute-0 sudo[140803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:51 compute-0 python3.9[140820]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:33:51 compute-0 systemd[1]: Reloading.
Jan 23 11:33:51 compute-0 systemd-rc-local-generator[141064]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:51 compute-0 systemd-sysv-generator[141067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:33:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:33:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.341s CPU time.
Jan 23 11:33:51 compute-0 systemd[1]: run-r87a610161b3943e39c421317cc8facc4.service: Deactivated successfully.
Jan 23 11:33:51 compute-0 sudo[140803]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:52 compute-0 sudo[141222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeoaxpxhagifpciyffswywrnxeflipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168031.91475-360-112761073537651/AnsiballZ_systemd.py'
Jan 23 11:33:52 compute-0 sudo[141222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:52 compute-0 python3.9[141224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:52 compute-0 systemd[1]: Reloading.
Jan 23 11:33:52 compute-0 systemd-rc-local-generator[141251]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:52 compute-0 systemd-sysv-generator[141256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:52 compute-0 sudo[141222]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:53 compute-0 sudo[141411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgapcxnlhpmerqdelaiwriquxtsovjlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168032.9369822-360-196946467836226/AnsiballZ_systemd.py'
Jan 23 11:33:53 compute-0 sudo[141411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:53 compute-0 python3.9[141413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:53 compute-0 systemd[1]: Reloading.
Jan 23 11:33:53 compute-0 systemd-rc-local-generator[141441]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:53 compute-0 systemd-sysv-generator[141448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:53 compute-0 sudo[141411]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:54 compute-0 sudo[141601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzaszqwthgorjxngnjaqkpjyhoagata ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168034.0111032-360-63361594031755/AnsiballZ_systemd.py'
Jan 23 11:33:54 compute-0 sudo[141601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:54 compute-0 python3.9[141603]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:54 compute-0 systemd[1]: Reloading.
Jan 23 11:33:54 compute-0 systemd-rc-local-generator[141637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:54 compute-0 systemd-sysv-generator[141640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:55 compute-0 sudo[141601]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:55 compute-0 sudo[141791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgcrfysnhycandljquexfdtxhxpfxshy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168035.1867766-360-225581550497050/AnsiballZ_systemd.py'
Jan 23 11:33:55 compute-0 sudo[141791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:55 compute-0 python3.9[141793]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:55 compute-0 sudo[141791]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:56 compute-0 sudo[141946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnatvqswyjykinncvfxdlvhvfypypsdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168036.11349-360-4213042210719/AnsiballZ_systemd.py'
Jan 23 11:33:56 compute-0 sudo[141946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:56 compute-0 python3.9[141948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:56 compute-0 systemd[1]: Reloading.
Jan 23 11:33:56 compute-0 systemd-rc-local-generator[141979]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:56 compute-0 systemd-sysv-generator[141982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:57 compute-0 sudo[141946]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:57 compute-0 sudo[142136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scemofcytnfxeseascvwieffypuxlwsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168037.2576249-396-134235620141511/AnsiballZ_systemd.py'
Jan 23 11:33:57 compute-0 sudo[142136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:57 compute-0 python3.9[142138]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 11:33:57 compute-0 systemd[1]: Reloading.
Jan 23 11:33:58 compute-0 systemd-rc-local-generator[142167]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:33:58 compute-0 systemd-sysv-generator[142171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:33:58 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 23 11:33:58 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 23 11:33:58 compute-0 sudo[142136]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:58 compute-0 sudo[142328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgqwdtlohghjvxqvdpfvgewowjbcejfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168038.383558-404-126616693579033/AnsiballZ_systemd.py'
Jan 23 11:33:58 compute-0 sudo[142328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:59 compute-0 python3.9[142330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:59 compute-0 sudo[142328]: pam_unix(sudo:session): session closed for user root
Jan 23 11:33:59 compute-0 sudo[142483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkaklvmuxvlsvloneqthgsfywhybcvgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168039.2929132-404-252743089713343/AnsiballZ_systemd.py'
Jan 23 11:33:59 compute-0 sudo[142483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:33:59 compute-0 python3.9[142485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:33:59 compute-0 sudo[142483]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:00 compute-0 sudo[142638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqtsaemivnkrfdysexnrcpadptxrrol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168040.128946-404-67540999414432/AnsiballZ_systemd.py'
Jan 23 11:34:00 compute-0 sudo[142638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:00 compute-0 python3.9[142640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:00 compute-0 sudo[142638]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:00 compute-0 podman[142642]: 2026-01-23 11:34:00.8425705 +0000 UTC m=+0.057095367 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:34:01 compute-0 sudo[142812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfjoonzvqtgxnshlplpxkpvvxymzjvnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168040.9565384-404-8832628522384/AnsiballZ_systemd.py'
Jan 23 11:34:01 compute-0 sudo[142812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:01 compute-0 python3.9[142814]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:01 compute-0 sudo[142812]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:01 compute-0 sudo[142981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqgfsodsrrzlzqqrisvabfvdrvmaxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168041.7127547-404-251250754817200/AnsiballZ_systemd.py'
Jan 23 11:34:01 compute-0 sudo[142981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:02 compute-0 podman[142941]: 2026-01-23 11:34:02.054004355 +0000 UTC m=+0.112428071 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 23 11:34:02 compute-0 python3.9[142991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:02 compute-0 sudo[142981]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:02 compute-0 sudo[143151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuhnxwpsjvlzdbrjdykeyuwocvvpfdnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168042.4547064-404-243099777358209/AnsiballZ_systemd.py'
Jan 23 11:34:02 compute-0 sudo[143151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:02 compute-0 python3.9[143153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:03 compute-0 sudo[143151]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:03 compute-0 sudo[143306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klrxvzybcacuzrntvuawpszktfzlhkhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168043.1767426-404-228751257577114/AnsiballZ_systemd.py'
Jan 23 11:34:03 compute-0 sudo[143306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:03 compute-0 python3.9[143308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:03 compute-0 sudo[143306]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:04 compute-0 sudo[143461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnjcjbqlrrbwsyjgqgoerqdyczpqzhzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168044.0778155-404-3505807455197/AnsiballZ_systemd.py'
Jan 23 11:34:04 compute-0 sudo[143461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:04 compute-0 python3.9[143463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:05 compute-0 sudo[143461]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:06 compute-0 sudo[143616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgpgipvehdeimupvaefngnfkfqyjudqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168045.84393-404-271005218753948/AnsiballZ_systemd.py'
Jan 23 11:34:06 compute-0 sudo[143616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:06 compute-0 python3.9[143618]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:06 compute-0 sudo[143616]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:06 compute-0 sudo[143771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhklmgyidlorvrhrljrpxxafxylgoro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168046.6068578-404-19229676463277/AnsiballZ_systemd.py'
Jan 23 11:34:06 compute-0 sudo[143771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:07 compute-0 python3.9[143773]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:07 compute-0 sudo[143771]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:07 compute-0 sudo[143926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfmrebxcaikoitjydcyhczrqiltoehh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168047.4148638-404-127822989105309/AnsiballZ_systemd.py'
Jan 23 11:34:07 compute-0 sudo[143926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:07 compute-0 python3.9[143928]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:08 compute-0 sudo[143926]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:08 compute-0 sudo[144081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvdvinvdamydiaeaegqbqnbpymelnrem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168048.2076964-404-167693840803785/AnsiballZ_systemd.py'
Jan 23 11:34:08 compute-0 sudo[144081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:08 compute-0 python3.9[144083]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:08 compute-0 sudo[144081]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:09 compute-0 sudo[144236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckyfkxhyxhftxkbvnxwzenrhshgheuac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168048.9608688-404-46065567483579/AnsiballZ_systemd.py'
Jan 23 11:34:09 compute-0 sudo[144236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:09 compute-0 python3.9[144238]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:09 compute-0 sudo[144236]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:09 compute-0 sudo[144391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnzrisvphjcguodsbvegtgxwxqtwzsng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168049.6883585-404-42234181814596/AnsiballZ_systemd.py'
Jan 23 11:34:09 compute-0 sudo[144391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:10 compute-0 python3.9[144393]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 11:34:10 compute-0 sudo[144391]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:10 compute-0 sudo[144546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzizqdnxdjoozulnlwpttyycfbtdryit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168050.6665533-506-110123519791942/AnsiballZ_file.py'
Jan 23 11:34:10 compute-0 sudo[144546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:11 compute-0 python3.9[144548]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:34:11 compute-0 sudo[144546]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:11 compute-0 sudo[144698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agytuxsveaxhyfhupuodvczoiraenbkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168051.2291346-506-53884423948506/AnsiballZ_file.py'
Jan 23 11:34:11 compute-0 sudo[144698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:11 compute-0 python3.9[144700]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:34:11 compute-0 sudo[144698]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:12 compute-0 sudo[144850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzuywstudfxntgebhzlcyoxqwifadxnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168051.8550391-506-143971568894048/AnsiballZ_file.py'
Jan 23 11:34:12 compute-0 sudo[144850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:12 compute-0 python3.9[144852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:34:12 compute-0 sudo[144850]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:12 compute-0 sudo[145002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkvbklxrtvmrnittnzohdytaogcqjlam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168052.43562-506-13588328549477/AnsiballZ_file.py'
Jan 23 11:34:12 compute-0 sudo[145002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:12 compute-0 python3.9[145004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:34:12 compute-0 sudo[145002]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:13 compute-0 sudo[145154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgilpiyeahtxzusbdratmwulgnooopog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168053.0085485-506-236664307311954/AnsiballZ_file.py'
Jan 23 11:34:13 compute-0 sudo[145154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:13 compute-0 python3.9[145156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:34:13 compute-0 sudo[145154]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:13 compute-0 sudo[145306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntunvwltoiuhrhturywpaalfwbyvvzzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168053.5576317-506-69664234079300/AnsiballZ_file.py'
Jan 23 11:34:13 compute-0 sudo[145306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:14 compute-0 python3.9[145308]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:34:14 compute-0 sudo[145306]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:14 compute-0 python3.9[145458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:34:15 compute-0 sudo[145608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsdjagauxiedymwfcrqtsxgpkcrrbcqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168055.059878-557-257463656117988/AnsiballZ_stat.py'
Jan 23 11:34:15 compute-0 sudo[145608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:15 compute-0 python3.9[145610]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:15 compute-0 sudo[145608]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:16 compute-0 sudo[145733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsunkrqczirsnzqryjiahibsdnbupotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168055.059878-557-257463656117988/AnsiballZ_copy.py'
Jan 23 11:34:16 compute-0 sudo[145733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:16 compute-0 python3.9[145735]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168055.059878-557-257463656117988/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:16 compute-0 sudo[145733]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:16 compute-0 sudo[145885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgphcsaizxgcalbpfahmbhgjwaibezgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168056.489326-557-96995907197787/AnsiballZ_stat.py'
Jan 23 11:34:16 compute-0 sudo[145885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:16 compute-0 python3.9[145887]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:16 compute-0 sudo[145885]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:17 compute-0 sudo[146010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qszdzihnotwsufpebwkmeaafngxgujoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168056.489326-557-96995907197787/AnsiballZ_copy.py'
Jan 23 11:34:17 compute-0 sudo[146010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:17 compute-0 python3.9[146012]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168056.489326-557-96995907197787/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:17 compute-0 sudo[146010]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:17 compute-0 sudo[146162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvutguiuibckwojofywlvojhveinhtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168057.559548-557-254482393444005/AnsiballZ_stat.py'
Jan 23 11:34:17 compute-0 sudo[146162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:17 compute-0 python3.9[146164]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:18 compute-0 sudo[146162]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:18 compute-0 sudo[146287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnjzlssslclwtqbyeaxxhnyvbyhehwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168057.559548-557-254482393444005/AnsiballZ_copy.py'
Jan 23 11:34:18 compute-0 sudo[146287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:18 compute-0 python3.9[146289]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168057.559548-557-254482393444005/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:18 compute-0 sudo[146287]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:18 compute-0 sudo[146439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafwkkfonvgiwzrwqmugetdpesxifygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168058.646976-557-21770818253386/AnsiballZ_stat.py'
Jan 23 11:34:18 compute-0 sudo[146439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:19 compute-0 python3.9[146441]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:19 compute-0 sudo[146439]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:19 compute-0 sudo[146564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upddconmezjxpzfvbkcltioaxuydihkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168058.646976-557-21770818253386/AnsiballZ_copy.py'
Jan 23 11:34:19 compute-0 sudo[146564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:19 compute-0 python3.9[146566]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168058.646976-557-21770818253386/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:19 compute-0 sudo[146564]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:20 compute-0 sudo[146716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvfiqtmosuegibeiimwfmfvtxqqazhoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168059.8520272-557-18066491764320/AnsiballZ_stat.py'
Jan 23 11:34:20 compute-0 sudo[146716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:20 compute-0 python3.9[146718]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:20 compute-0 sudo[146716]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:20 compute-0 sudo[146841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgwpemyshkmhzqvpbwbizwlreoqmztx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168059.8520272-557-18066491764320/AnsiballZ_copy.py'
Jan 23 11:34:20 compute-0 sudo[146841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:20 compute-0 python3.9[146843]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168059.8520272-557-18066491764320/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:20 compute-0 sudo[146841]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:21 compute-0 sudo[146993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgktqrndkisynvjanwtvtyapbalcmkpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168060.9571388-557-274045817121569/AnsiballZ_stat.py'
Jan 23 11:34:21 compute-0 sudo[146993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:21 compute-0 python3.9[146995]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:21 compute-0 sudo[146993]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:21 compute-0 sudo[147118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shnwlnnpswnzlouavlyvbvmkwbbqeeqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168060.9571388-557-274045817121569/AnsiballZ_copy.py'
Jan 23 11:34:21 compute-0 sudo[147118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:21 compute-0 python3.9[147120]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168060.9571388-557-274045817121569/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:21 compute-0 sudo[147118]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:22 compute-0 sudo[147270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enydvdwgwtvzeddbdnmwppfkpowhxgnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168062.070416-557-155167184988091/AnsiballZ_stat.py'
Jan 23 11:34:22 compute-0 sudo[147270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:22 compute-0 python3.9[147272]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:22 compute-0 sudo[147270]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:22 compute-0 sudo[147393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yozwqaxqpkmdnzdmlcpoxwkzlyabekfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168062.070416-557-155167184988091/AnsiballZ_copy.py'
Jan 23 11:34:22 compute-0 sudo[147393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:22 compute-0 python3.9[147395]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168062.070416-557-155167184988091/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:22 compute-0 sudo[147393]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:23 compute-0 sudo[147545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojzklcaeppsxgcejcyuvcslijqjmwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168063.0836856-557-32091486528280/AnsiballZ_stat.py'
Jan 23 11:34:23 compute-0 sudo[147545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:23 compute-0 python3.9[147547]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:23 compute-0 sudo[147545]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:23 compute-0 sudo[147670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crkfvrspgxlvztmqbocysvwlfiebvufm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168063.0836856-557-32091486528280/AnsiballZ_copy.py'
Jan 23 11:34:23 compute-0 sudo[147670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:24 compute-0 python3.9[147672]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769168063.0836856-557-32091486528280/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:24 compute-0 sudo[147670]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:24 compute-0 sudo[147822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdnxckffvgxvqhlgboszglyrtjmplbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168064.2116046-670-88569522694419/AnsiballZ_command.py'
Jan 23 11:34:24 compute-0 sudo[147822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:24 compute-0 python3.9[147824]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 23 11:34:24 compute-0 sudo[147822]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:25 compute-0 sudo[147975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kppcoeupildtzeaazpwdugxlnmpnsvvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168064.9635246-679-217659104580344/AnsiballZ_file.py'
Jan 23 11:34:25 compute-0 sudo[147975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:25 compute-0 python3.9[147977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:25 compute-0 sudo[147975]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:25 compute-0 sudo[148127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yftbrzvihglfalaraqalwadftbzhwnrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168065.491128-679-156681494377167/AnsiballZ_file.py'
Jan 23 11:34:25 compute-0 sudo[148127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:25 compute-0 python3.9[148129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:25 compute-0 sudo[148127]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:26 compute-0 sudo[148279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oixnspeypumljbufmgjihwkgnwrprilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168066.0782685-679-250493606511040/AnsiballZ_file.py'
Jan 23 11:34:26 compute-0 sudo[148279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:26 compute-0 python3.9[148281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:26 compute-0 sudo[148279]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:26 compute-0 sudo[148431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxqtgpimgpmuthjyrgxtylwfthxlvrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168066.6301882-679-136173774677897/AnsiballZ_file.py'
Jan 23 11:34:26 compute-0 sudo[148431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:27 compute-0 python3.9[148433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:27 compute-0 sudo[148431]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:27 compute-0 sudo[148583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbbwcertriikundeyobgrygimdxurtid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168067.1875446-679-100572605131421/AnsiballZ_file.py'
Jan 23 11:34:27 compute-0 sudo[148583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:27 compute-0 python3.9[148585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:27 compute-0 sudo[148583]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:28 compute-0 sudo[148735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urujsleasgulvqvxkkvmsmuflscblgoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168067.8053684-679-195908788567988/AnsiballZ_file.py'
Jan 23 11:34:28 compute-0 sudo[148735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:28 compute-0 python3.9[148737]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:28 compute-0 sudo[148735]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:28 compute-0 sudo[148887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkvobcokoskmnoxaokgorceakcfakuna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168068.388661-679-11963040950386/AnsiballZ_file.py'
Jan 23 11:34:28 compute-0 sudo[148887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:28 compute-0 python3.9[148889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:28 compute-0 sudo[148887]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:34:29.073 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:34:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:34:29.074 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:34:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:34:29.074 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:34:29 compute-0 sudo[149039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawfsamveiqcqeadmtcgrivomgaitird ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168069.019838-679-104931685764824/AnsiballZ_file.py'
Jan 23 11:34:29 compute-0 sudo[149039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:29 compute-0 python3.9[149041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:29 compute-0 sudo[149039]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:29 compute-0 sudo[149191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqrnryltjoyspgcuuxeuisztpetyxvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168069.6355906-679-222647562114473/AnsiballZ_file.py'
Jan 23 11:34:29 compute-0 sudo[149191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:30 compute-0 python3.9[149193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:30 compute-0 sudo[149191]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:30 compute-0 sudo[149343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naauwqltiyezbqltnriiakaimtjxdnci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168070.291699-679-85889344215652/AnsiballZ_file.py'
Jan 23 11:34:30 compute-0 sudo[149343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:30 compute-0 python3.9[149345]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:30 compute-0 sudo[149343]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:31 compute-0 sudo[149511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-linukfztcfydteqkabzenthrcgphasrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168070.9046023-679-269944409701683/AnsiballZ_file.py'
Jan 23 11:34:31 compute-0 sudo[149511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:31 compute-0 podman[149469]: 2026-01-23 11:34:31.207262184 +0000 UTC m=+0.059099544 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:34:31 compute-0 python3.9[149516]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:31 compute-0 sudo[149511]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:31 compute-0 sudo[149666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejwajzdraqrudgtnxxeqizhenwbqmgys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168071.5817533-679-275002735217158/AnsiballZ_file.py'
Jan 23 11:34:31 compute-0 sudo[149666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:32 compute-0 python3.9[149668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:32 compute-0 sudo[149666]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:32 compute-0 sudo[149837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjcceoovvwwmbonkqkzoplunfaspfdws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168072.206815-679-195905297546357/AnsiballZ_file.py'
Jan 23 11:34:32 compute-0 sudo[149837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:32 compute-0 podman[149792]: 2026-01-23 11:34:32.612351477 +0000 UTC m=+0.103320493 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 23 11:34:32 compute-0 python3.9[149843]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:32 compute-0 sudo[149837]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:33 compute-0 sudo[149996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvhqjqizysxmzivaydslzzihfnitiqut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168072.9171324-679-213292980117405/AnsiballZ_file.py'
Jan 23 11:34:33 compute-0 sudo[149996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:33 compute-0 python3.9[149998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:33 compute-0 sudo[149996]: pam_unix(sudo:session): session closed for user root
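This run of ansible.builtin.file tasks creates one systemd drop-in directory per libvirt socket unit (virtqemud-admin, virtsecretd, virtsecretd-ro, virtsecretd-admin, and the rest), all root:root with mode 0755. Outside Ansible the same layout could be produced with, for example:
# for s in virtqemud-admin virtsecretd virtsecretd-ro virtsecretd-admin; do
#     install -d -o root -g root -m 0755 "/etc/systemd/system/${s}.socket.d"
# done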
Jan 23 11:34:34 compute-0 sudo[150148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxymsttnjynqqyvzqzvtdwjdsqajeucw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168073.7663696-778-151442331033159/AnsiballZ_stat.py'
Jan 23 11:34:34 compute-0 sudo[150148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:34 compute-0 python3.9[150150]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:34 compute-0 sudo[150148]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:34 compute-0 sudo[150271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgfeladxsbalxylvtskkekwttnrkpjzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168073.7663696-778-151442331033159/AnsiballZ_copy.py'
Jan 23 11:34:34 compute-0 sudo[150271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:34 compute-0 python3.9[150273]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168073.7663696-778-151442331033159/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:34 compute-0 sudo[150271]: pam_unix(sudo:session): session closed for user root
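Every override.conf written in this sequence carries the same checksum (0bad41f409b4ee7e780a2a59dc18f5c84ed99826), i.e. a single rendering of libvirt-socket.unit.j2 is reused for all the socket units, and its body is redacted from the log (content=NOT_LOGGING_PARAMETER). A drop-in of this kind typically adjusts socket ownership or mode; the directives below are illustrative guesses only, not recovered from this log:
# cat >/etc/systemd/system/virtlogd.socket.d/override.conf <<'EOF'
[Socket]
# hypothetical values; the real rendered body is not recorded here
SocketMode=0660
SocketGroup=libvirt
EOF
# systemctl daemon-reload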
Jan 23 11:34:35 compute-0 sudo[150423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvynijuaffemqzshjjjiyyipsekonrks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168074.9927154-778-271231448666379/AnsiballZ_stat.py'
Jan 23 11:34:35 compute-0 sudo[150423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:35 compute-0 python3.9[150425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:35 compute-0 sudo[150423]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:35 compute-0 sudo[150546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqjljyfaczwsmqzecgmmciajjbsmntcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168074.9927154-778-271231448666379/AnsiballZ_copy.py'
Jan 23 11:34:35 compute-0 sudo[150546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:36 compute-0 python3.9[150548]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168074.9927154-778-271231448666379/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:36 compute-0 sudo[150546]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:36 compute-0 sudo[150700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiyexxyalyfwtbrcursdsjbjnamqoptf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168076.2067184-778-257127471499544/AnsiballZ_stat.py'
Jan 23 11:34:36 compute-0 sudo[150700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:36 compute-0 python3.9[150702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:36 compute-0 sudo[150700]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:36 compute-0 sshd-session[150593]: Invalid user solv from 193.32.162.146 port 46888
Jan 23 11:34:36 compute-0 sshd-session[150593]: Connection closed by invalid user solv 193.32.162.146 port 46888 [preauth]
Jan 23 11:34:36 compute-0 sudo[150823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodspeukugnlxnkiwmuyeerpfrdtboah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168076.2067184-778-257127471499544/AnsiballZ_copy.py'
Jan 23 11:34:36 compute-0 sudo[150823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:37 compute-0 python3.9[150825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168076.2067184-778-257127471499544/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:37 compute-0 sudo[150823]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:37 compute-0 sudo[150975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiacwbpeloqiufbiwfusmnyuxfqcymaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168077.3238885-778-280738132681195/AnsiballZ_stat.py'
Jan 23 11:34:37 compute-0 sudo[150975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:37 compute-0 python3.9[150977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:37 compute-0 sudo[150975]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:38 compute-0 sudo[151098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zacjomswfqjqljwmgrbgkudghahjbsnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168077.3238885-778-280738132681195/AnsiballZ_copy.py'
Jan 23 11:34:38 compute-0 sudo[151098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:38 compute-0 python3.9[151100]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168077.3238885-778-280738132681195/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:38 compute-0 sudo[151098]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:38 compute-0 sudo[151250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikkrzwdywgljpwxxexnvkmxfuqtjzbiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168078.553054-778-253773476413468/AnsiballZ_stat.py'
Jan 23 11:34:38 compute-0 sudo[151250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:39 compute-0 python3.9[151252]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:39 compute-0 sudo[151250]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:39 compute-0 sudo[151373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqhrshdxbyymlcbflmuqrhqditjaiteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168078.553054-778-253773476413468/AnsiballZ_copy.py'
Jan 23 11:34:39 compute-0 sudo[151373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:39 compute-0 python3.9[151375]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168078.553054-778-253773476413468/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:39 compute-0 sudo[151373]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:39 compute-0 sudo[151525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdrkkwkfrxtsnlpfvdlaeicgpcofqloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168079.6457286-778-21192700304451/AnsiballZ_stat.py'
Jan 23 11:34:39 compute-0 sudo[151525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:40 compute-0 python3.9[151527]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:40 compute-0 sudo[151525]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:40 compute-0 sudo[151648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ectqyceeqkeginegvkdugyyauhfhcvjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168079.6457286-778-21192700304451/AnsiballZ_copy.py'
Jan 23 11:34:40 compute-0 sudo[151648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:40 compute-0 python3.9[151650]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168079.6457286-778-21192700304451/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:40 compute-0 sudo[151648]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:41 compute-0 sudo[151800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esufkmmhanlmqgmhjbpixsxwtmivsdmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168080.803072-778-165341761834286/AnsiballZ_stat.py'
Jan 23 11:34:41 compute-0 sudo[151800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:41 compute-0 python3.9[151802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:41 compute-0 sudo[151800]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:41 compute-0 sudo[151923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylsdbnetdxhiaegaidbzwyoaxwopncdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168080.803072-778-165341761834286/AnsiballZ_copy.py'
Jan 23 11:34:41 compute-0 sudo[151923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:41 compute-0 python3.9[151925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168080.803072-778-165341761834286/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:41 compute-0 sudo[151923]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:42 compute-0 sudo[152075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noajwgnaohgknleaolleedhcxmhbsqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168082.1436565-778-217212675530176/AnsiballZ_stat.py'
Jan 23 11:34:42 compute-0 sudo[152075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:42 compute-0 python3.9[152077]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:42 compute-0 sudo[152075]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:43 compute-0 sudo[152198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahrxmfcqnkkkfpiazldibkrkqgstylgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168082.1436565-778-217212675530176/AnsiballZ_copy.py'
Jan 23 11:34:43 compute-0 sudo[152198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:43 compute-0 python3.9[152200]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168082.1436565-778-217212675530176/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:43 compute-0 sudo[152198]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:43 compute-0 sudo[152350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muyhroinghxocwksfukmblhbuzxmcegc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168083.3968656-778-90877500574020/AnsiballZ_stat.py'
Jan 23 11:34:43 compute-0 sudo[152350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:43 compute-0 python3.9[152352]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:43 compute-0 sudo[152350]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:44 compute-0 sudo[152473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djhbsnqosfswjyfkulmjjdvjqfsfeukc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168083.3968656-778-90877500574020/AnsiballZ_copy.py'
Jan 23 11:34:44 compute-0 sudo[152473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:44 compute-0 python3.9[152475]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168083.3968656-778-90877500574020/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:44 compute-0 sudo[152473]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:44 compute-0 sudo[152625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flkingsvwpydwsygbjnxdxyztalsguni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168084.553779-778-134245492288930/AnsiballZ_stat.py'
Jan 23 11:34:44 compute-0 sudo[152625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:45 compute-0 python3.9[152627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:45 compute-0 sudo[152625]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:45 compute-0 sudo[152748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zstqwhmhabufdefihyltzmzfmbvpqaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168084.553779-778-134245492288930/AnsiballZ_copy.py'
Jan 23 11:34:45 compute-0 sudo[152748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:45 compute-0 python3.9[152750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168084.553779-778-134245492288930/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:45 compute-0 sudo[152748]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:45 compute-0 sudo[152900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkofczqofwxzhukesaogwbtrsuydtfpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168085.7285008-778-70679688821815/AnsiballZ_stat.py'
Jan 23 11:34:45 compute-0 sudo[152900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:46 compute-0 python3.9[152902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:46 compute-0 sudo[152900]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:46 compute-0 sudo[153023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdkjyxwiiwtksghkfyslhkhvyqvbdlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168085.7285008-778-70679688821815/AnsiballZ_copy.py'
Jan 23 11:34:46 compute-0 sudo[153023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:46 compute-0 python3.9[153025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168085.7285008-778-70679688821815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:46 compute-0 sudo[153023]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:47 compute-0 sudo[153175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjubmofljgkskfjjvauxajflxwceoytr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168086.819442-778-260723667495551/AnsiballZ_stat.py'
Jan 23 11:34:47 compute-0 sudo[153175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:47 compute-0 python3.9[153177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:47 compute-0 sudo[153175]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:47 compute-0 sudo[153298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vewzluibkkbahyhfqdcqwrpgzjxinlxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168086.819442-778-260723667495551/AnsiballZ_copy.py'
Jan 23 11:34:47 compute-0 sudo[153298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:47 compute-0 python3.9[153300]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168086.819442-778-260723667495551/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:47 compute-0 sudo[153298]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:48 compute-0 sudo[153450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyglqpdmhmjhaelgfqtywyvkyqnrgzmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168088.1400373-778-189946038726601/AnsiballZ_stat.py'
Jan 23 11:34:48 compute-0 sudo[153450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:48 compute-0 python3.9[153452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:48 compute-0 sudo[153450]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:49 compute-0 sudo[153573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhmrisptqwvokxxnlthybecxquhigwtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168088.1400373-778-189946038726601/AnsiballZ_copy.py'
Jan 23 11:34:49 compute-0 sudo[153573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:49 compute-0 python3.9[153575]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168088.1400373-778-189946038726601/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:49 compute-0 sudo[153573]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:49 compute-0 sudo[153725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhcojqjuithmlhgjvimlmhtkcxvukjva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168089.4596975-778-156586004652141/AnsiballZ_stat.py'
Jan 23 11:34:49 compute-0 sudo[153725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:49 compute-0 python3.9[153727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:34:49 compute-0 sudo[153725]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:50 compute-0 sudo[153848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gspnsehiwqlomoysoduuoalpexfmmeat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168089.4596975-778-156586004652141/AnsiballZ_copy.py'
Jan 23 11:34:50 compute-0 sudo[153848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:50 compute-0 python3.9[153850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168089.4596975-778-156586004652141/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:50 compute-0 sudo[153848]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:51 compute-0 python3.9[154000]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
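The command task above asserts the SELinux labeling of /run/libvirt: grep exits non-zero when no ':container_*_t' context matches, and set -o pipefail additionally propagates a failure of ls itself, so the task succeeds only when /run/libvirt exists and at least one entry there is labeled with a container type. Replayed by hand:
# ls -lRZ /run/libvirt | grep -E ':container_\S+_t'; echo "grep rc=$?"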
Jan 23 11:34:51 compute-0 sudo[154153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhmusoewpakudsfzfreetpaautpqrzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168091.3879344-984-232745291201753/AnsiballZ_seboolean.py'
Jan 23 11:34:51 compute-0 sudo[154153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:52 compute-0 python3.9[154155]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 23 11:34:53 compute-0 sudo[154153]: pam_unix(sudo:session): session closed for user root
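ansible.posix.seboolean with persistent=True state=True is the module form of setsebool -P; the persistent flag forces a policy rebuild, which lines up with the load_policy notice from dbus-broker-launch just below. By hand:
# setsebool -P os_enable_vtpm on
# getsebool os_enable_vtpm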
Jan 23 11:34:53 compute-0 sudo[154309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqhbafnapycmosnrsheseqrimotjlwtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168093.33423-992-273972547437564/AnsiballZ_copy.py'
Jan 23 11:34:53 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 23 11:34:53 compute-0 sudo[154309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:53 compute-0 python3.9[154311]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:53 compute-0 sudo[154309]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:54 compute-0 sudo[154461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icguyikjxeybuvxpaaorfaidtjnhmkbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168093.9657216-992-94111308154208/AnsiballZ_copy.py'
Jan 23 11:34:54 compute-0 sudo[154461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:54 compute-0 python3.9[154463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:54 compute-0 sudo[154461]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:54 compute-0 sudo[154613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxhwzpnsjtggrlihztgthpslrekzidxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168094.5587597-992-187375347136594/AnsiballZ_copy.py'
Jan 23 11:34:54 compute-0 sudo[154613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:54 compute-0 python3.9[154615]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:55 compute-0 sudo[154613]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:55 compute-0 sudo[154765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woejdwpjvoubfwzqpahmirkjalzyqibk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168095.1224682-992-91888153140469/AnsiballZ_copy.py'
Jan 23 11:34:55 compute-0 sudo[154765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:55 compute-0 python3.9[154767]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:55 compute-0 sudo[154765]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:55 compute-0 sudo[154917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrrrqximhoztlmxqxaenqredqsiqcxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168095.7254512-992-165193739043002/AnsiballZ_copy.py'
Jan 23 11:34:55 compute-0 sudo[154917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:56 compute-0 python3.9[154919]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:56 compute-0 sudo[154917]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:56 compute-0 sudo[155069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekfokccpttmmfdtwggnhsozjigmstyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168096.3675935-1028-81606322984544/AnsiballZ_copy.py'
Jan 23 11:34:56 compute-0 sudo[155069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:56 compute-0 python3.9[155071]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:56 compute-0 sudo[155069]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:57 compute-0 sudo[155221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wekzjtbqykfqhnpnkapdfejarvoidevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168096.9612586-1028-67119259828639/AnsiballZ_copy.py'
Jan 23 11:34:57 compute-0 sudo[155221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:57 compute-0 python3.9[155223]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:57 compute-0 sudo[155221]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:57 compute-0 sudo[155373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rklwhjplvbaspuukwpogddwxitncdyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168097.5865788-1028-133480983043392/AnsiballZ_copy.py'
Jan 23 11:34:57 compute-0 sudo[155373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:58 compute-0 python3.9[155375]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:58 compute-0 sudo[155373]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:58 compute-0 sudo[155525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcnywhigsvvrlexumfxjqmqycvvamfqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168098.2090757-1028-74297131605032/AnsiballZ_copy.py'
Jan 23 11:34:58 compute-0 sudo[155525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:58 compute-0 python3.9[155527]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:58 compute-0 sudo[155525]: pam_unix(sudo:session): session closed for user root
Jan 23 11:34:59 compute-0 sudo[155677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqgjbkncgajhawazdmcvfdzgabmwxzbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168098.9195588-1028-242172833186260/AnsiballZ_copy.py'
Jan 23 11:34:59 compute-0 sudo[155677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:34:59 compute-0 python3.9[155679]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:34:59 compute-0 sudo[155677]: pam_unix(sudo:session): session closed for user root
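The copy tasks above fan a single issued pair out to every filename libvirt and qemu expect: tls.crt becomes servercert.pem, clientcert.pem and the qemu server/client certs, tls.key becomes the matching keys, and ca.crt lands at /etc/pki/CA/cacert.pem and /etc/pki/qemu/ca-cert.pem. Whether the deployed chain is self-consistent can be checked with openssl (a sketch):
# openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
# openssl x509 -in /etc/pki/libvirt/clientcert.pem -noout -subject -enddate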
Jan 23 11:34:59 compute-0 sudo[155829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfesfngsbjhsltmfulowjpuhyhhcntfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168099.6063616-1064-235849943685334/AnsiballZ_systemd.py'
Jan 23 11:34:59 compute-0 sudo[155829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:00 compute-0 python3.9[155831]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:35:00 compute-0 systemd[1]: Reloading.
Jan 23 11:35:00 compute-0 systemd-rc-local-generator[155856]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:00 compute-0 systemd-sysv-generator[155859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:00 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 23 11:35:00 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 23 11:35:00 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 23 11:35:00 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 23 11:35:00 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 23 11:35:00 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 23 11:35:00 compute-0 sudo[155829]: pam_unix(sudo:session): session closed for user root
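Every restart task in this block passes daemon_reload=True, so each one triggers a full "Reloading." pass first; the rc.local and SysV network generator warnings repeat on every reload because they come from the unit generators, not from libvirt (the rc.local one goes away once chmod +x /etc/rc.d/rc.local is applied, if that script is actually wanted). The equivalent by hand for this first unit:
# systemctl daemon-reload
# systemctl restart virtlogd.service
# systemctl is-active virtlogd.service virtlogd.socket virtlogd-admin.socket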
Jan 23 11:35:01 compute-0 sudo[156022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycrunzbljjchdsgzyltucgzzotkcxrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168100.8669872-1064-28604621132268/AnsiballZ_systemd.py'
Jan 23 11:35:01 compute-0 sudo[156022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:01 compute-0 python3.9[156024]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:35:01 compute-0 systemd[1]: Reloading.
Jan 23 11:35:01 compute-0 systemd-rc-local-generator[156065]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:01 compute-0 systemd-sysv-generator[156069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:01 compute-0 podman[156026]: 2026-01-23 11:35:01.579669157 +0000 UTC m=+0.094435384 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 23 11:35:01 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 23 11:35:01 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 23 11:35:01 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 23 11:35:01 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 23 11:35:01 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 23 11:35:01 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 23 11:35:01 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 11:35:01 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 23 11:35:01 compute-0 sudo[156022]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:02 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 23 11:35:02 compute-0 sudo[156259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfpihysevhutsvetpbdagerknuspdikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168101.9573467-1064-58753591351835/AnsiballZ_systemd.py'
Jan 23 11:35:02 compute-0 sudo[156259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:02 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 23 11:35:02 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 23 11:35:02 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 23 11:35:02 compute-0 python3.9[156261]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:35:02 compute-0 systemd[1]: Reloading.
Jan 23 11:35:02 compute-0 systemd-sysv-generator[156298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:02 compute-0 systemd-rc-local-generator[156292]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:02 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 23 11:35:02 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 23 11:35:02 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 23 11:35:02 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 23 11:35:02 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 11:35:02 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 11:35:02 compute-0 podman[156307]: 2026-01-23 11:35:02.885219006 +0000 UTC m=+0.081573783 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 11:35:02 compute-0 sudo[156259]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:03 compute-0 setroubleshoot[156185]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f514d5f0-0de3-4e53-8ea4-c3250658c2cc
Jan 23 11:35:03 compute-0 setroubleshoot[156185]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 23 11:35:03 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
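Should the local-module route from the setroubleshoot suggestion above be taken, the denial and the loaded module can both be checked afterwards (my-virtlogd is the name the suggestion itself picks):
# ausearch -m avc -ts recent -c virtlogd
# semodule -l | grep my-virtlogd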
Jan 23 11:35:03 compute-0 sudo[156506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwpqtdkidklkzuqbeboeklkcksapsiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168103.180785-1064-227433702212682/AnsiballZ_systemd.py'
Jan 23 11:35:03 compute-0 sudo[156506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:03 compute-0 setroubleshoot[156185]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default, then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  To allow this access for now, execute:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
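
For this particular denial, the .te source that the audit2allow pipeline above typically generates looks like the following sketch (illustrative, reconstructed from the denial details, not the module actually built on this host):

    module my-virtlogd 1.0;

    require {
            type virtlogd_t;
            class capability dac_read_search;
    }

    #============= virtlogd_t ==============
    allow virtlogd_t self:capability dac_read_search;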
                                                  
Jan 23 11:35:03 compute-0 python3.9[156509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:35:03 compute-0 systemd[1]: Reloading.
Jan 23 11:35:03 compute-0 systemd-sysv-generator[156537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:03 compute-0 systemd-rc-local-generator[156533]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:04 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 23 11:35:04 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 23 11:35:04 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 11:35:04 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 23 11:35:04 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 23 11:35:04 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 23 11:35:04 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 23 11:35:04 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 23 11:35:04 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 23 11:35:04 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 23 11:35:04 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 11:35:04 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 23 11:35:04 compute-0 sudo[156506]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:04 compute-0 sudo[156723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbzikbwlhwqomuxprgpkkiifqizkrsbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168104.4002993-1064-15535063096307/AnsiballZ_systemd.py'
Jan 23 11:35:04 compute-0 sudo[156723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:04 compute-0 python3.9[156725]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:35:05 compute-0 systemd[1]: Reloading.
Jan 23 11:35:05 compute-0 systemd-sysv-generator[156754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:05 compute-0 systemd-rc-local-generator[156746]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:06 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 23 11:35:06 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 23 11:35:06 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 23 11:35:06 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 23 11:35:06 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 23 11:35:06 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 23 11:35:06 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 11:35:06 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 11:35:06 compute-0 sudo[156723]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:06 compute-0 sudo[156936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihegpkbxuozqlztvhbexwpngpzadnexk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168106.6840634-1101-216358982976128/AnsiballZ_file.py'
Jan 23 11:35:06 compute-0 sudo[156936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:07 compute-0 python3.9[156938]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:07 compute-0 sudo[156936]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:07 compute-0 sudo[157088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nppjdscqpushnigdnlglttdglmqbjwtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168107.3798776-1109-63553809635961/AnsiballZ_find.py'
Jan 23 11:35:07 compute-0 sudo[157088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:07 compute-0 python3.9[157090]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:35:07 compute-0 sudo[157088]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:08 compute-0 sudo[157240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbavoqvqjpjjbbifzysxgsusulwqzwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168108.3195953-1123-70399056839474/AnsiballZ_stat.py'
Jan 23 11:35:08 compute-0 sudo[157240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:08 compute-0 python3.9[157242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:08 compute-0 sudo[157240]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:09 compute-0 sudo[157363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttsppeftuaklchakgekyjuqigjsjrxub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168108.3195953-1123-70399056839474/AnsiballZ_copy.py'
Jan 23 11:35:09 compute-0 sudo[157363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:09 compute-0 python3.9[157365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168108.3195953-1123-70399056839474/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:09 compute-0 sudo[157363]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:10 compute-0 sudo[157515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktmnbnztejcwfzzflgdaqqbstmajwibz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168109.8077037-1139-90791492295441/AnsiballZ_file.py'
Jan 23 11:35:10 compute-0 sudo[157515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:10 compute-0 python3.9[157517]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:10 compute-0 sudo[157515]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:10 compute-0 sudo[157667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potwgnfvbvygvtrlibctxgwjinbwjung ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168110.4675992-1147-280794866167684/AnsiballZ_stat.py'
Jan 23 11:35:10 compute-0 sudo[157667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:10 compute-0 python3.9[157669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:10 compute-0 sudo[157667]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:11 compute-0 sudo[157745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zafhlmgpllqgasoypuksnzxthioiolad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168110.4675992-1147-280794866167684/AnsiballZ_file.py'
Jan 23 11:35:11 compute-0 sudo[157745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:11 compute-0 python3.9[157747]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:11 compute-0 sudo[157745]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:11 compute-0 sudo[157897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuwttvovptreyqzmycjyawduedgprzio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168111.451335-1159-72356639087819/AnsiballZ_stat.py'
Jan 23 11:35:11 compute-0 sudo[157897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:11 compute-0 python3.9[157899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:11 compute-0 sudo[157897]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:12 compute-0 sudo[157975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avbtfxdlfcxppraceynigzxxutqllwdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168111.451335-1159-72356639087819/AnsiballZ_file.py'
Jan 23 11:35:12 compute-0 sudo[157975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:12 compute-0 python3.9[157977]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1gt6ym70 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:12 compute-0 sudo[157975]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:12 compute-0 sudo[158127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoslutdpjgogxsxtcfyyzhfwgjmfkepo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168112.5268848-1171-191899950301522/AnsiballZ_stat.py'
Jan 23 11:35:12 compute-0 sudo[158127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:12 compute-0 python3.9[158129]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:13 compute-0 sudo[158127]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:13 compute-0 sudo[158205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adohqirhdcjboijkorphoflggfrqjbtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168112.5268848-1171-191899950301522/AnsiballZ_file.py'
Jan 23 11:35:13 compute-0 sudo[158205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:13 compute-0 python3.9[158207]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:13 compute-0 sudo[158205]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:13 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 23 11:35:13 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 23 11:35:13 compute-0 sudo[158357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzikpxefvezfsldhlacwwrbtvhfycbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168113.7086518-1184-147373051803478/AnsiballZ_command.py'
Jan 23 11:35:13 compute-0 sudo[158357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:14 compute-0 python3.9[158359]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:35:14 compute-0 sudo[158357]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:14 compute-0 sudo[158510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuogawsswxdbhgzixnmreuoboqrlonsk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168114.3206933-1192-32452854433084/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 11:35:14 compute-0 sudo[158510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:14 compute-0 python3[158512]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 11:35:14 compute-0 sudo[158510]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:15 compute-0 sudo[158662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lswjmnnvfqyfrkuthoyzhpgryswfefla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168115.0737364-1200-191451999239456/AnsiballZ_stat.py'
Jan 23 11:35:15 compute-0 sudo[158662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:15 compute-0 python3.9[158664]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:15 compute-0 sudo[158662]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:15 compute-0 sudo[158740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbbxkzchgokhvipfyvoctdsbmmnvviym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168115.0737364-1200-191451999239456/AnsiballZ_file.py'
Jan 23 11:35:15 compute-0 sudo[158740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:16 compute-0 python3.9[158742]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:16 compute-0 sudo[158740]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:16 compute-0 sudo[158892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbdynbwsiraofilktytwrtvklhoiynax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168116.264214-1212-142185958764146/AnsiballZ_stat.py'
Jan 23 11:35:16 compute-0 sudo[158892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:16 compute-0 python3.9[158894]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:16 compute-0 sudo[158892]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:17 compute-0 sudo[159017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmksschthojujhzbjwanajqngmensmgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168116.264214-1212-142185958764146/AnsiballZ_copy.py'
Jan 23 11:35:17 compute-0 sudo[159017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:17 compute-0 python3.9[159019]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168116.264214-1212-142185958764146/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:17 compute-0 sudo[159017]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:17 compute-0 sudo[159169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qekwstcepfhcrsrbcmhhsgqmhduxliqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168117.5066442-1227-133866681293360/AnsiballZ_stat.py'
Jan 23 11:35:17 compute-0 sudo[159169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:18 compute-0 python3.9[159171]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:18 compute-0 sudo[159169]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:18 compute-0 sudo[159247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jawyagervalhnhwxvujgnmxtfeygwzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168117.5066442-1227-133866681293360/AnsiballZ_file.py'
Jan 23 11:35:18 compute-0 sudo[159247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:18 compute-0 python3.9[159249]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:18 compute-0 sudo[159247]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:18 compute-0 sudo[159399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdwizzbvjasnvtnqjhsbycwojeawfjer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168118.7002988-1239-60082902731536/AnsiballZ_stat.py'
Jan 23 11:35:18 compute-0 sudo[159399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:19 compute-0 python3.9[159401]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:19 compute-0 sudo[159399]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:19 compute-0 sudo[159477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzzjzxqotrartwovipaoefbctwpsbqql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168118.7002988-1239-60082902731536/AnsiballZ_file.py'
Jan 23 11:35:19 compute-0 sudo[159477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:19 compute-0 python3.9[159479]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:19 compute-0 sudo[159477]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:20 compute-0 sudo[159629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynhegmvdwyoblyjdsqdscicbzdnptstq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168119.767734-1251-162965012801522/AnsiballZ_stat.py'
Jan 23 11:35:20 compute-0 sudo[159629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:20 compute-0 python3.9[159631]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:20 compute-0 sudo[159629]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:20 compute-0 sudo[159754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgyhwqycczsaeshbmekqwqlgrrmqenej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168119.767734-1251-162965012801522/AnsiballZ_copy.py'
Jan 23 11:35:20 compute-0 sudo[159754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:20 compute-0 python3.9[159756]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168119.767734-1251-162965012801522/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:20 compute-0 sudo[159754]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:21 compute-0 sudo[159906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heqyuhlrwmzgmndyhkdbrdkjblhcrsed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168120.9595356-1266-247964030261421/AnsiballZ_file.py'
Jan 23 11:35:21 compute-0 sudo[159906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:21 compute-0 python3.9[159908]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:21 compute-0 sudo[159906]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:21 compute-0 sudo[160058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msxltvgmnewhldfptlujtdilatywbbzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168121.6661742-1274-39959351817895/AnsiballZ_command.py'
Jan 23 11:35:21 compute-0 sudo[160058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:22 compute-0 python3.9[160060]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:35:22 compute-0 sudo[160058]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:22 compute-0 sudo[160213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmkhoaluefltyftomxrpvoqpddhvqlxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168122.404724-1282-110358011462436/AnsiballZ_blockinfile.py'
Jan 23 11:35:22 compute-0 sudo[160213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:23 compute-0 python3.9[160215]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:23 compute-0 sudo[160213]: pam_unix(sudo:session): session closed for user root
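
Reconstructed from the parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK" with marker_begin=BEGIN and marker_end=END), the block this task maintains in /etc/sysconfig/nftables.conf reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s parameter means the candidate file is syntax-checked by nft before being moved into place.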
Jan 23 11:35:23 compute-0 sudo[160365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggwgowshsrvlcaetastweipibzlfqjqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168123.2723386-1291-274093623813779/AnsiballZ_command.py'
Jan 23 11:35:23 compute-0 sudo[160365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:23 compute-0 python3.9[160367]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:35:23 compute-0 sudo[160365]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:24 compute-0 sudo[160518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqohsutuxkstkdfqyzdpljpkynojzvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168123.8580463-1299-187781889372592/AnsiballZ_stat.py'
Jan 23 11:35:24 compute-0 sudo[160518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:24 compute-0 python3.9[160520]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:35:24 compute-0 sudo[160518]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:24 compute-0 sudo[160672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdcgmautxeepdxmrhomrfrvsweqprnje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168124.4691038-1307-202780889659633/AnsiballZ_command.py'
Jan 23 11:35:24 compute-0 sudo[160672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:24 compute-0 python3.9[160674]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:35:24 compute-0 sudo[160672]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:25 compute-0 sudo[160827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnjootnrvornxuzidcjootknmsecoqop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168125.1284475-1315-277576871087/AnsiballZ_file.py'
Jan 23 11:35:25 compute-0 sudo[160827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:25 compute-0 python3.9[160829]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:25 compute-0 sudo[160827]: pam_unix(sudo:session): session closed for user root
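
The tasks between 11:35:21 and 11:35:25 implement a change-sentinel pattern: /etc/nftables/edpm-rules.nft.changed is touched when the ruleset is rewritten, tested with stat, and removed once the flush/rules/update-jumps files have been loaded. A minimal shell sketch of the conditional apply step (illustrative, not the literal Ansible logic):

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        # reload only the parts that can change between runs
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        # clear the sentinel so an unchanged ruleset is not reapplied
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi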
Jan 23 11:35:26 compute-0 sudo[160979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlkaentuybrhnxrgjregjfrftpqysekg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168125.7600298-1323-182427619058362/AnsiballZ_stat.py'
Jan 23 11:35:26 compute-0 sudo[160979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:26 compute-0 python3.9[160981]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:26 compute-0 sudo[160979]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:26 compute-0 sudo[161102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hysuszlmbdeskktyxsuzypeyzotlgpig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168125.7600298-1323-182427619058362/AnsiballZ_copy.py'
Jan 23 11:35:26 compute-0 sudo[161102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:26 compute-0 python3.9[161104]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168125.7600298-1323-182427619058362/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:26 compute-0 sudo[161102]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:27 compute-0 sudo[161254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvcxgdqbzpdhvyrsykqeftlcgckcyuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168126.9814103-1338-171925854042756/AnsiballZ_stat.py'
Jan 23 11:35:27 compute-0 sudo[161254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:27 compute-0 python3.9[161256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:27 compute-0 sudo[161254]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:27 compute-0 sudo[161377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjwsotyzdfilsuowzhrbwihlxmreoudk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168126.9814103-1338-171925854042756/AnsiballZ_copy.py'
Jan 23 11:35:27 compute-0 sudo[161377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:27 compute-0 python3.9[161379]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168126.9814103-1338-171925854042756/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:27 compute-0 sudo[161377]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:28 compute-0 sudo[161529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzqdbzxpayhahsaiokslsvbrannbgcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168128.1324415-1353-24885891711607/AnsiballZ_stat.py'
Jan 23 11:35:28 compute-0 sudo[161529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:28 compute-0 python3.9[161531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:28 compute-0 sudo[161529]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:29 compute-0 sudo[161652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhspeyyywomtihiobekkkgbazhvgxnsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168128.1324415-1353-24885891711607/AnsiballZ_copy.py'
Jan 23 11:35:29 compute-0 sudo[161652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:35:29.074 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:35:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:35:29.076 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:35:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:35:29.076 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:35:29 compute-0 python3.9[161654]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168128.1324415-1353-24885891711607/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:29 compute-0 sudo[161652]: pam_unix(sudo:session): session closed for user root
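
The journal records only checksums for the three unit files installed above, not their contents. For orientation, a minimal target unit such as edpm_libvirt.target could be as small as the sketch below; the Wants= list is an assumption inferred from the libvirt daemons restarted earlier in this log, not the shipped file:

    [Unit]
    Description=EDPM libvirt target
    Wants=virtqemud.service virtlogd.service virtsecretd.service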
Jan 23 11:35:29 compute-0 sudo[161804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prceiztswdliydbtofjlcofovmxizkly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168129.4785244-1368-42201034753448/AnsiballZ_systemd.py'
Jan 23 11:35:29 compute-0 sudo[161804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:30 compute-0 python3.9[161806]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:35:30 compute-0 systemd[1]: Reloading.
Jan 23 11:35:30 compute-0 systemd-rc-local-generator[161832]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:30 compute-0 systemd-sysv-generator[161835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:30 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 23 11:35:30 compute-0 sudo[161804]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:30 compute-0 sudo[161995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnzyaakhnlnhshubdnpytovzblhqxgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168130.5949695-1376-5513329848560/AnsiballZ_systemd.py'
Jan 23 11:35:30 compute-0 sudo[161995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:31 compute-0 python3.9[161997]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 11:35:31 compute-0 systemd[1]: Reloading.
Jan 23 11:35:31 compute-0 systemd-rc-local-generator[162022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:31 compute-0 systemd-sysv-generator[162025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:31 compute-0 systemd[1]: Reloading.
Jan 23 11:35:31 compute-0 systemd-sysv-generator[162065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:35:31 compute-0 systemd-rc-local-generator[162062]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:35:31 compute-0 sudo[161995]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:32 compute-0 sshd-session[107404]: Connection closed by 192.168.122.30 port 52874
Jan 23 11:35:32 compute-0 sshd-session[107401]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:35:32 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 11:35:32 compute-0 systemd[1]: session-23.scope: Consumed 3min 10.039s CPU time.
Jan 23 11:35:32 compute-0 systemd-logind[798]: Session 23 logged out. Waiting for processes to exit.
Jan 23 11:35:32 compute-0 systemd-logind[798]: Removed session 23.
Jan 23 11:35:32 compute-0 podman[162094]: 2026-01-23 11:35:32.201313658 +0000 UTC m=+0.060060756 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 11:35:33 compute-0 podman[162114]: 2026-01-23 11:35:33.815029219 +0000 UTC m=+0.137736367 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 11:35:38 compute-0 sshd-session[162143]: Accepted publickey for zuul from 192.168.122.30 port 40282 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:35:38 compute-0 systemd-logind[798]: New session 24 of user zuul.
Jan 23 11:35:38 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 23 11:35:38 compute-0 sshd-session[162143]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:35:39 compute-0 python3.9[162296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:35:40 compute-0 python3.9[162450]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:35:40 compute-0 network[162467]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:35:40 compute-0 network[162468]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:35:40 compute-0 network[162469]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:35:44 compute-0 sudo[162738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unscftuajahhkurlduktgupedcreybjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168143.925084-42-175844998298910/AnsiballZ_setup.py'
Jan 23 11:35:44 compute-0 sudo[162738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:44 compute-0 python3.9[162740]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:35:44 compute-0 sudo[162738]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:45 compute-0 sudo[162822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexcnbnlnscophzvmsxjubofkmhkarfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168143.925084-42-175844998298910/AnsiballZ_dnf.py'
Jan 23 11:35:45 compute-0 sudo[162822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:45 compute-0 python3.9[162824]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:35:50 compute-0 sudo[162822]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:51 compute-0 sudo[162975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusjgvbcjvhhgjssquthmhxtpngtlwlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168150.7228045-54-152573926002231/AnsiballZ_stat.py'
Jan 23 11:35:51 compute-0 sudo[162975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:51 compute-0 python3.9[162977]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:35:51 compute-0 sudo[162975]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:54 compute-0 sudo[163127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdmvjbliyfnwzhqmbjmbwnodmhuqlxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168154.0590906-64-217522450244720/AnsiballZ_command.py'
Jan 23 11:35:54 compute-0 sudo[163127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:54 compute-0 python3.9[163129]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:35:54 compute-0 sudo[163127]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:55 compute-0 sudo[163280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwhcsckukpxopbeelgztryvydzjebro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168155.0019016-74-265042752234902/AnsiballZ_stat.py'
Jan 23 11:35:55 compute-0 sudo[163280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:55 compute-0 python3.9[163282]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:35:55 compute-0 sudo[163280]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:55 compute-0 sudo[163432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsremvshhsorsrokhsfqpedumaqwdzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168155.598308-82-3200046180137/AnsiballZ_command.py'
Jan 23 11:35:55 compute-0 sudo[163432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:56 compute-0 python3.9[163434]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:35:56 compute-0 sudo[163432]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:56 compute-0 sudo[163585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkameuduhitczrwytdkjxbevbyybxlmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168156.2291372-90-245572359812507/AnsiballZ_stat.py'
Jan 23 11:35:56 compute-0 sudo[163585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:56 compute-0 python3.9[163587]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:35:56 compute-0 sudo[163585]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:57 compute-0 sudo[163708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crocumpmxxcskewckeohkjtyekdbgjod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168156.2291372-90-245572359812507/AnsiballZ_copy.py'
Jan 23 11:35:57 compute-0 sudo[163708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:57 compute-0 python3.9[163710]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168156.2291372-90-245572359812507/.source.iscsi _original_basename=.lpd76oji follow=False checksum=129ef046b2fba05dba2ef69fd779035a999447cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:57 compute-0 sudo[163708]: pam_unix(sudo:session): session closed for user root
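
/usr/sbin/iscsi-iname prints a newly generated IQN, which the copy task then writes into /etc/iscsi/initiatorname.iscsi. The resulting file has the single-line form below (the IQN shown is a made-up example, not the value generated on this host):

    InitiatorName=iqn.1994-05.com.redhat:3f5a8e2c91d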
Jan 23 11:35:58 compute-0 sudo[163860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdgppcfvlprygfghbakrdhhczgecxcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168157.5120854-105-156761468681061/AnsiballZ_file.py'
Jan 23 11:35:58 compute-0 sudo[163860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:58 compute-0 python3.9[163862]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:58 compute-0 sudo[163860]: pam_unix(sudo:session): session closed for user root
Jan 23 11:35:58 compute-0 sudo[164012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtuhtnvparbhquvfzusjzxckirrcfeex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168158.4296906-113-279848269433574/AnsiballZ_lineinfile.py'
Jan 23 11:35:58 compute-0 sudo[164012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:35:59 compute-0 python3.9[164014]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:35:59 compute-0 sudo[164012]: pam_unix(sudo:session): session closed for user root
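
The lineinfile task above pins the CHAP algorithm list in /etc/iscsi/iscsid.conf, inserting after the commented default when no existing node.session.auth.chap_algs line matches the regexp. As logged, the managed line is:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5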
Jan 23 11:35:59 compute-0 sudo[164164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kopmcqzuusfwvwsccxzbtjaihzrxraox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168159.315911-122-158093779677588/AnsiballZ_systemd_service.py'
Jan 23 11:35:59 compute-0 sudo[164164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:00 compute-0 python3.9[164166]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:00 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 23 11:36:00 compute-0 sudo[164164]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:00 compute-0 sudo[164320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ficlpjymyrclkxuekzjcunkxfhisrkwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168160.4567413-130-207422351361391/AnsiballZ_systemd_service.py'
Jan 23 11:36:00 compute-0 sudo[164320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:01 compute-0 python3.9[164322]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:02 compute-0 systemd[1]: Reloading.
Jan 23 11:36:02 compute-0 systemd-sysv-generator[164356]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:36:02 compute-0 systemd-rc-local-generator[164353]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:36:02 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 11:36:02 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 23 11:36:02 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 11:36:02 compute-0 systemd[1]: Started Open-iSCSI.
Jan 23 11:36:02 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 23 11:36:02 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 23 11:36:02 compute-0 sudo[164320]: pam_unix(sudo:session): session closed for user root
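The two systemd_service invocations above bring up socket activation before the daemon itself (iscsid.socket, then iscsid). A sketch condensing them into one looped task; the loop form is an editorial condensation, since the log shows two separate invocations:

    - name: Enable and start iscsid, socket first   # assumed name
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        enabled: true
        state: started
      loop:
        - iscsid.socket
        - iscsid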
Jan 23 11:36:02 compute-0 podman[164362]: 2026-01-23 11:36:02.565254229 +0000 UTC m=+0.104991319 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:36:03 compute-0 python3.9[164540]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:36:03 compute-0 network[164557]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:36:03 compute-0 network[164558]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:36:03 compute-0 network[164559]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:36:04 compute-0 podman[164566]: 2026-01-23 11:36:04.28947551 +0000 UTC m=+0.113757066 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 23 11:36:08 compute-0 sudo[164852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psananumwhhccmrlognaqzvaszzlmrjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168167.7168033-153-195106225253724/AnsiballZ_dnf.py'
Jan 23 11:36:08 compute-0 sudo[164852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:08 compute-0 python3.9[164854]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:36:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:36:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:36:10 compute-0 systemd[1]: Reloading.
Jan 23 11:36:10 compute-0 systemd-sysv-generator[164901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:36:10 compute-0 systemd-rc-local-generator[164896]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:36:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:36:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:36:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:36:10 compute-0 systemd[1]: run-r6702e4370a604aefbbb9d3ac8f7e8080.service: Deactivated successfully.
Jan 23 11:36:11 compute-0 sudo[164852]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:11 compute-0 sudo[165168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcscjbdedtcmbtelfcpgjqqnqfovtphd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168171.3690202-162-100951925301570/AnsiballZ_file.py'
Jan 23 11:36:11 compute-0 sudo[165168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:11 compute-0 python3.9[165170]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 11:36:11 compute-0 sudo[165168]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:12 compute-0 sudo[165320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoqqhtaevflpxswegftvkcojwiplwjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168172.0759616-170-191477134658219/AnsiballZ_modprobe.py'
Jan 23 11:36:12 compute-0 sudo[165320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:12 compute-0 python3.9[165322]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 23 11:36:12 compute-0 sudo[165320]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:13 compute-0 sudo[165476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxcikgphlttupzxxpnhpbegfdngvfbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168172.9199362-178-219319498726121/AnsiballZ_stat.py'
Jan 23 11:36:13 compute-0 sudo[165476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:13 compute-0 python3.9[165478]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:36:13 compute-0 sudo[165476]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:13 compute-0 sudo[165599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqlzroxubjeuhfsgisgbqyghmputjjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168172.9199362-178-219319498726121/AnsiballZ_copy.py'
Jan 23 11:36:13 compute-0 sudo[165599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:13 compute-0 python3.9[165601]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168172.9199362-178-219319498726121/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:13 compute-0 sudo[165599]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:14 compute-0 sudo[165751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvebcinkafkkfltwhhatfxgznyxyjgdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168174.1867304-194-2505675346450/AnsiballZ_lineinfile.py'
Jan 23 11:36:14 compute-0 sudo[165751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:14 compute-0 python3.9[165753]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:14 compute-0 sudo[165751]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:15 compute-0 sudo[165903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqkciuxnovdyedecyhsboghcnpknnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168174.8514638-202-241371245778966/AnsiballZ_systemd.py'
Jan 23 11:36:15 compute-0 sudo[165903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:15 compute-0 python3.9[165905]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:36:15 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 11:36:15 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 11:36:15 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 23 11:36:15 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 11:36:15 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 11:36:15 compute-0 sudo[165903]: pam_unix(sudo:session): session closed for user root
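The block above is the usual load-and-persist pattern for a kernel module: install the package, modprobe the module into the running kernel (persistent=disabled leaves persistence to separate files), drop a modules-load.d snippet plus an /etc/modules entry for boot time, then restart systemd-modules-load.service so the configuration is exercised immediately. A sketch of the core steps; the snippet body is an assumption, since the log records only the template name and checksum:

    - name: Load dm-multipath in the running kernel
      community.general.modprobe:
        name: dm-multipath
        state: present
        persistent: disabled

    - name: Persist dm-multipath across reboots
      ansible.builtin.copy:
        dest: /etc/modules-load.d/dm-multipath.conf
        content: "dm-multipath\n"   # assumed body; the log only shows checksum 065061c6...
        mode: '0644'

    - name: Apply the module list now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted

The same pattern repeats below for nvme-fabrics.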
Jan 23 11:36:16 compute-0 sudo[166059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-todlnmwacuxgppvcunwcwqxoaziohprc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168175.9904962-210-217988560603971/AnsiballZ_command.py'
Jan 23 11:36:16 compute-0 sudo[166059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:16 compute-0 python3.9[166061]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:36:16 compute-0 sudo[166059]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:17 compute-0 sudo[166212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmvtvxpsnjvnptnqdhfogbetqdlafoep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168176.7704868-220-238521944681185/AnsiballZ_stat.py'
Jan 23 11:36:17 compute-0 sudo[166212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:17 compute-0 python3.9[166214]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:36:17 compute-0 sudo[166212]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:17 compute-0 sudo[166364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwganfgohwyzovsnaxnhxakueqeuqtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168177.386316-229-83402000677173/AnsiballZ_stat.py'
Jan 23 11:36:17 compute-0 sudo[166364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:17 compute-0 python3.9[166366]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:36:17 compute-0 sudo[166364]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:18 compute-0 sudo[166487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvbecpqewmjdumomjdhfprmcrsyyucpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168177.386316-229-83402000677173/AnsiballZ_copy.py'
Jan 23 11:36:18 compute-0 sudo[166487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:18 compute-0 python3.9[166489]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168177.386316-229-83402000677173/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:18 compute-0 sudo[166487]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:18 compute-0 sudo[166639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rttjonewuoegnpfniqdfdfrwdtqgkqdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168178.472054-244-276702836148797/AnsiballZ_command.py'
Jan 23 11:36:18 compute-0 sudo[166639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:18 compute-0 python3.9[166641]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:36:18 compute-0 sudo[166639]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:19 compute-0 sudo[166792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmppadnrkrweojadztumaawmdklbzbxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168179.2195446-252-172979637302085/AnsiballZ_lineinfile.py'
Jan 23 11:36:19 compute-0 sudo[166792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:19 compute-0 python3.9[166794]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:19 compute-0 sudo[166792]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:20 compute-0 sudo[166944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leeiyvsmnlggsellxxwtzjpaiwbacmtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168179.9299405-260-239672239367745/AnsiballZ_replace.py'
Jan 23 11:36:20 compute-0 sudo[166944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:20 compute-0 python3.9[166946]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:20 compute-0 sudo[166944]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:21 compute-0 sudo[167096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbjtnsqheokqusnjmutmcubptyqkixny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168180.8462012-268-32798807045582/AnsiballZ_replace.py'
Jan 23 11:36:21 compute-0 sudo[167096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:21 compute-0 python3.9[167098]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:21 compute-0 sudo[167096]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:22 compute-0 sudo[167248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgdrvadbokdrxixqnpfdohocphcfbsig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168181.7080824-277-58844600911980/AnsiballZ_lineinfile.py'
Jan 23 11:36:22 compute-0 sudo[167248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:22 compute-0 python3.9[167250]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:22 compute-0 sudo[167248]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:22 compute-0 sudo[167400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snunkcclossajyuipspgfgadozxquzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168182.4303129-277-167826342992741/AnsiballZ_lineinfile.py'
Jan 23 11:36:22 compute-0 sudo[167400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:22 compute-0 python3.9[167402]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:22 compute-0 sudo[167400]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:23 compute-0 sudo[167552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqxnuzagjsrhtecffcxvbasqwzapwasw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168183.1425254-277-166343027886554/AnsiballZ_lineinfile.py'
Jan 23 11:36:23 compute-0 sudo[167552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:23 compute-0 python3.9[167554]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:23 compute-0 sudo[167552]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:24 compute-0 sudo[167704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnrybavzlcynzycvtznzetryimqupgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168183.7801187-277-11992627115547/AnsiballZ_lineinfile.py'
Jan 23 11:36:24 compute-0 sudo[167704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:24 compute-0 python3.9[167706]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:24 compute-0 sudo[167704]: pam_unix(sudo:session): session closed for user root
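Taken together, the grep/lineinfile/replace steps above normalize /etc/multipath.conf: they guarantee a blacklist section exists, strip any catch-all devnode ".*" entry from it (which would hide every device from multipath), and pin four settings under the defaults header. One plausible end state, assuming the copied template opens its defaults section with a literal "defaults {" line:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
            # ... any further settings carried over from the template
    }

    blacklist {
    }

Because each lineinfile uses insertafter=^defaults with firstmatch, a freshly inserted line lands directly under the header, so the last task's setting ends up first; where the template already carries a setting, the regexp rewrites it in place and the template's ordering is kept.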
Jan 23 11:36:24 compute-0 sudo[167856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jglmispviizsplqlgfhklcxihgyajztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168184.3942225-306-68809682228641/AnsiballZ_stat.py'
Jan 23 11:36:24 compute-0 sudo[167856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:24 compute-0 python3.9[167858]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:36:24 compute-0 sudo[167856]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:25 compute-0 sudo[168010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoopwpmddnkzylpmkuoyhijooizfccob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168185.0294967-314-74373511937212/AnsiballZ_command.py'
Jan 23 11:36:25 compute-0 sudo[168010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:25 compute-0 python3.9[168012]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:36:25 compute-0 sudo[168010]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:25 compute-0 sudo[168163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcwfbhibvumqxcpdcppmwlppkntktqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168185.657962-323-41062088411883/AnsiballZ_systemd_service.py'
Jan 23 11:36:25 compute-0 sudo[168163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:26 compute-0 python3.9[168165]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:26 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 23 11:36:26 compute-0 sudo[168163]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:26 compute-0 sudo[168319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmpgpgxzwibkajtaofstyybbytvxnepb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168186.4859734-331-28036740305654/AnsiballZ_systemd_service.py'
Jan 23 11:36:26 compute-0 sudo[168319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:27 compute-0 python3.9[168321]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:27 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 23 11:36:27 compute-0 udevadm[168326]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 23 11:36:27 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 23 11:36:27 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 11:36:27 compute-0 multipathd[168329]: --------start up--------
Jan 23 11:36:27 compute-0 multipathd[168329]: read /etc/multipath.conf
Jan 23 11:36:27 compute-0 multipathd[168329]: path checkers start up
Jan 23 11:36:27 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 11:36:27 compute-0 sudo[168319]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:27 compute-0 sudo[168486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkiefzthaulrrrjfybbkxjtjtbzwdmvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168187.6170537-343-86454523909808/AnsiballZ_file.py'
Jan 23 11:36:27 compute-0 sudo[168486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:28 compute-0 python3.9[168488]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 11:36:28 compute-0 sudo[168486]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:28 compute-0 sudo[168638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdnzermogwwstnmaqdshyauvzvxwfvjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168188.2270403-351-174282156203058/AnsiballZ_modprobe.py'
Jan 23 11:36:28 compute-0 sudo[168638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:28 compute-0 python3.9[168640]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 23 11:36:28 compute-0 kernel: Key type psk registered
Jan 23 11:36:28 compute-0 sudo[168638]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:36:29.075 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:36:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:36:29.077 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:36:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:36:29.077 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:36:29 compute-0 sudo[168800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccancdewvpxeucyspqbaxdjltbuteaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168188.9157214-359-10067394866484/AnsiballZ_stat.py'
Jan 23 11:36:29 compute-0 sudo[168800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:29 compute-0 python3.9[168802]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:36:29 compute-0 sudo[168800]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:29 compute-0 sudo[168923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lccwevtihcsoeqajkqttcwzvtwzhrqsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168188.9157214-359-10067394866484/AnsiballZ_copy.py'
Jan 23 11:36:29 compute-0 sudo[168923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:29 compute-0 python3.9[168925]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168188.9157214-359-10067394866484/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:29 compute-0 sudo[168923]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:30 compute-0 sudo[169075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lscmkosdzkxjhmeeeazfxlsgkvsngulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168190.1634796-375-277364460268333/AnsiballZ_lineinfile.py'
Jan 23 11:36:30 compute-0 sudo[169075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:30 compute-0 python3.9[169077]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:30 compute-0 sudo[169075]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:31 compute-0 sudo[169227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfgjotehlmdmkgcaonmdvcwcpdhdpjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168190.7922642-383-225409695275535/AnsiballZ_systemd.py'
Jan 23 11:36:31 compute-0 sudo[169227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:31 compute-0 python3.9[169229]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:36:31 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 11:36:31 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 11:36:31 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 23 11:36:31 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 11:36:31 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 11:36:31 compute-0 sudo[169227]: pam_unix(sudo:session): session closed for user root
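The nvme-fabrics steps above repeat the load-and-persist pattern sketched after the dm-multipath block, with only the module name and snippet path (/etc/modules-load.d/nvme-fabrics.conf) changed; the kernel's "Key type psk registered" line appears to be a side effect of loading nvme-fabrics on a kernel built with NVMe/TCP TLS-PSK support.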
Jan 23 11:36:31 compute-0 sudo[169384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjjfakkdbvbuvgadfyyfyqaopazexrby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168191.6745565-391-197295633029273/AnsiballZ_dnf.py'
Jan 23 11:36:31 compute-0 sudo[169384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:32 compute-0 python3.9[169386]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:36:32 compute-0 podman[169388]: 2026-01-23 11:36:32.774368595 +0000 UTC m=+0.089504960 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:36:34 compute-0 systemd[1]: Reloading.
Jan 23 11:36:34 compute-0 systemd-sysv-generator[169456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:36:34 compute-0 systemd-rc-local-generator[169448]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:36:34 compute-0 podman[169412]: 2026-01-23 11:36:34.793987007 +0000 UTC m=+0.112932726 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:36:34 compute-0 systemd[1]: Reloading.
Jan 23 11:36:35 compute-0 systemd-rc-local-generator[169500]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:36:35 compute-0 systemd-sysv-generator[169504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:36:35 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 11:36:35 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 11:36:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 11:36:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 11:36:35 compute-0 systemd[1]: Reloading.
Jan 23 11:36:35 compute-0 systemd-rc-local-generator[169592]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:36:35 compute-0 systemd-sysv-generator[169597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:36:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 11:36:36 compute-0 sudo[169384]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 11:36:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 11:36:36 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.555s CPU time.
Jan 23 11:36:36 compute-0 systemd[1]: run-re7349a717b7f4933ae574f83ee4a2fc3.service: Deactivated successfully.
Jan 23 11:36:38 compute-0 sudo[170894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgcpovbmycmxoutgqiyjcxqjqlytyqez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168197.7388718-399-157540689269927/AnsiballZ_systemd_service.py'
Jan 23 11:36:38 compute-0 sudo[170894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:38 compute-0 python3.9[170896]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:36:38 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 23 11:36:38 compute-0 iscsid[164364]: iscsid shutting down.
Jan 23 11:36:38 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 23 11:36:38 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 23 11:36:38 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 11:36:38 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 23 11:36:38 compute-0 systemd[1]: Started Open-iSCSI.
Jan 23 11:36:38 compute-0 sudo[170894]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:39 compute-0 sudo[171050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbzotdzkocqzupttxplwrabxbhxenvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168198.74146-407-88338643241258/AnsiballZ_systemd_service.py'
Jan 23 11:36:39 compute-0 sudo[171050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:39 compute-0 python3.9[171052]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:36:39 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 23 11:36:39 compute-0 multipathd[168329]: exit (signal)
Jan 23 11:36:39 compute-0 multipathd[168329]: --------shut down-------
Jan 23 11:36:39 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 23 11:36:39 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 23 11:36:39 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 11:36:39 compute-0 multipathd[171058]: --------start up--------
Jan 23 11:36:39 compute-0 multipathd[171058]: read /etc/multipath.conf
Jan 23 11:36:39 compute-0 multipathd[171058]: path checkers start up
Jan 23 11:36:39 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 11:36:39 compute-0 sudo[171050]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:40 compute-0 python3.9[171215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:36:41 compute-0 sudo[171369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vicbgozxvpgtzgkxlrtagyplljeuaiyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168200.8394804-425-243900650214160/AnsiballZ_file.py'
Jan 23 11:36:41 compute-0 sudo[171369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:41 compute-0 python3.9[171371]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:41 compute-0 sudo[171369]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:41 compute-0 sudo[171521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpsrnacnzmgzcrtwdikvgoyygzmupvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168201.5858512-436-222749834660964/AnsiballZ_systemd_service.py'
Jan 23 11:36:41 compute-0 sudo[171521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:42 compute-0 python3.9[171523]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:36:42 compute-0 systemd[1]: Reloading.
Jan 23 11:36:42 compute-0 systemd-sysv-generator[171555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:36:42 compute-0 systemd-rc-local-generator[171552]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:36:42 compute-0 sudo[171521]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:43 compute-0 python3.9[171709]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:36:43 compute-0 network[171726]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:36:43 compute-0 network[171727]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:36:43 compute-0 network[171728]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:36:43 compute-0 sshd-session[171734]: Invalid user solv from 193.32.162.146 port 55254
Jan 23 11:36:44 compute-0 sshd-session[171734]: Connection closed by invalid user solv 193.32.162.146 port 55254 [preauth]
Jan 23 11:36:49 compute-0 sudo[172000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdrhvgnrsjhxjfykwyzcvfvvvkiwcod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168209.65242-455-54063282295336/AnsiballZ_systemd_service.py'
Jan 23 11:36:49 compute-0 sudo[172000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:50 compute-0 python3.9[172002]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:50 compute-0 sudo[172000]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:50 compute-0 sudo[172153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhanmerszvnnkevtvscsnpvfiwaenabk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168210.4984272-455-8342363341006/AnsiballZ_systemd_service.py'
Jan 23 11:36:50 compute-0 sudo[172153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:51 compute-0 python3.9[172155]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:51 compute-0 sudo[172153]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:51 compute-0 sudo[172306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zamlgkfefskxqjpoqvbfdkriluynuvsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168211.2302322-455-39569172738779/AnsiballZ_systemd_service.py'
Jan 23 11:36:51 compute-0 sudo[172306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:51 compute-0 python3.9[172308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:51 compute-0 sudo[172306]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:52 compute-0 sudo[172459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfbyflgqujlkaijxmuwhumrocfnmdcgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168211.9952123-455-127931397189798/AnsiballZ_systemd_service.py'
Jan 23 11:36:52 compute-0 sudo[172459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:52 compute-0 python3.9[172461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:52 compute-0 sudo[172459]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:52 compute-0 sudo[172612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdfqtancbkjrxumaiyjujnivplqzhha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168212.708747-455-85109366880457/AnsiballZ_systemd_service.py'
Jan 23 11:36:52 compute-0 sudo[172612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:53 compute-0 python3.9[172614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:53 compute-0 sudo[172612]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:53 compute-0 sudo[172765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-konqhhsecjdvnaqzuicxuufzucwggcsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168213.410314-455-76107278799610/AnsiballZ_systemd_service.py'
Jan 23 11:36:53 compute-0 sudo[172765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:53 compute-0 python3.9[172767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:54 compute-0 sudo[172765]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:54 compute-0 sudo[172918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygazjundhqvjxefqrlukvsucvidmfhmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168214.165331-455-272100714393198/AnsiballZ_systemd_service.py'
Jan 23 11:36:54 compute-0 sudo[172918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:54 compute-0 python3.9[172920]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:54 compute-0 sudo[172918]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:55 compute-0 sudo[173071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgfjdhzypjzbdxlyzoeiwyaddpquluzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168214.933793-455-140705200776799/AnsiballZ_systemd_service.py'
Jan 23 11:36:55 compute-0 sudo[173071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:55 compute-0 python3.9[173073]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:36:55 compute-0 sudo[173071]: pam_unix(sudo:session): session closed for user root
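The eight systemd_service invocations above differ only in the unit name, which points at a loop over the legacy TripleO nova units being retired in favor of the podified control plane. A sketch; the loop form itself is an assumption, since the log shows eight separate invocations:

    - name: Stop and disable legacy TripleO nova services   # assumed name
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - tripleo_nova_compute.service
        - tripleo_nova_migration_target.service
        - tripleo_nova_api_cron.service
        - tripleo_nova_api.service
        - tripleo_nova_conductor.service
        - tripleo_nova_metadata.service
        - tripleo_nova_scheduler.service
        - tripleo_nova_vnc_proxy.service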
Jan 23 11:36:56 compute-0 sudo[173224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leaqseoljumtrxavnrlfyipcyrdnodnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168216.0170417-514-143015471704129/AnsiballZ_file.py'
Jan 23 11:36:56 compute-0 sudo[173224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:56 compute-0 python3.9[173226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:56 compute-0 sudo[173224]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:56 compute-0 sudo[173376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajhsjferhqeeptlzezlwqophlagjaleo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168216.6583-514-22600489282569/AnsiballZ_file.py'
Jan 23 11:36:56 compute-0 sudo[173376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:57 compute-0 python3.9[173378]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:57 compute-0 sudo[173376]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:57 compute-0 sudo[173528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpijdiqklblxjqfjfnsdsyxaifdzuold ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168217.2169304-514-263179332361466/AnsiballZ_file.py'
Jan 23 11:36:57 compute-0 sudo[173528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:57 compute-0 python3.9[173530]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:57 compute-0 sudo[173528]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:58 compute-0 sudo[173680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvemrwttpphkxpcbtiyirhbzujfbqgoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168217.7932086-514-36175517880225/AnsiballZ_file.py'
Jan 23 11:36:58 compute-0 sudo[173680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:58 compute-0 python3.9[173682]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:58 compute-0 sudo[173680]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:58 compute-0 sudo[173832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygpjhzmfbyxmdwhsuijkmndigwvrfqjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168218.3798716-514-264690308113222/AnsiballZ_file.py'
Jan 23 11:36:58 compute-0 sudo[173832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:58 compute-0 python3.9[173834]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:58 compute-0 sudo[173832]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:59 compute-0 sudo[173984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbsubesfmirghbyuvxknmnrhatypovuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168218.9549716-514-126815368206205/AnsiballZ_file.py'
Jan 23 11:36:59 compute-0 sudo[173984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:36:59 compute-0 python3.9[173986]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:36:59 compute-0 sudo[173984]: pam_unix(sudo:session): session closed for user root
Jan 23 11:36:59 compute-0 sudo[174136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfjwzxdqdbdhpgzrszdgtweomtwcsqpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168219.6045024-514-47232464645369/AnsiballZ_file.py'
Jan 23 11:36:59 compute-0 sudo[174136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:00 compute-0 python3.9[174138]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:00 compute-0 sudo[174136]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:00 compute-0 sudo[174288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzcoxqltojmbrkwxdixwjqjhywuzufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168220.2936268-514-220824920795825/AnsiballZ_file.py'
Jan 23 11:37:00 compute-0 sudo[174288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:00 compute-0 python3.9[174290]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:00 compute-0 sudo[174288]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:01 compute-0 sudo[174440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djmxcphgwmvypestralepxfhebzseije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168221.0062573-571-267949173714632/AnsiballZ_file.py'
Jan 23 11:37:01 compute-0 sudo[174440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:01 compute-0 python3.9[174442]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:01 compute-0 sudo[174440]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:01 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 23 11:37:01 compute-0 sudo[174593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pifzhkzxnfgnakbijgildybrvllifwsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168221.6383274-571-18797379827293/AnsiballZ_file.py'
Jan 23 11:37:01 compute-0 sudo[174593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:02 compute-0 python3.9[174595]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:02 compute-0 sudo[174593]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:02 compute-0 sudo[174745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckdbpgzjltsgxybbbnjdtdheucatheez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168222.31659-571-167462668770636/AnsiballZ_file.py'
Jan 23 11:37:02 compute-0 sudo[174745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:02 compute-0 python3.9[174747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:02 compute-0 sudo[174745]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:02 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 11:37:03 compute-0 podman[174753]: 2026-01-23 11:37:03.009089825 +0000 UTC m=+0.079243591 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 11:37:03 compute-0 sudo[174918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nibljpbnnvxbttuopvarqgzjvlnhyedy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168223.0237808-571-79926436557041/AnsiballZ_file.py'
Jan 23 11:37:03 compute-0 sudo[174918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:03 compute-0 python3.9[174920]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:03 compute-0 sudo[174918]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:03 compute-0 sudo[175070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjiluqeurtosobqpxkdsaukqzcyaqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168223.671733-571-230243251307640/AnsiballZ_file.py'
Jan 23 11:37:03 compute-0 sudo[175070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:04 compute-0 python3.9[175072]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:04 compute-0 sudo[175070]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:04 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 23 11:37:04 compute-0 sudo[175223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcegtuarinbiyulldhiwfwdvxdcnnvji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168224.3716376-571-235912979560522/AnsiballZ_file.py'
Jan 23 11:37:04 compute-0 sudo[175223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:04 compute-0 python3.9[175225]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:04 compute-0 sudo[175223]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:05 compute-0 sudo[175391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvalzzkdqjqsztgbrnhafqsmovwfbitt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168225.0390556-571-106760654357089/AnsiballZ_file.py'
Jan 23 11:37:05 compute-0 sudo[175391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:05 compute-0 podman[175349]: 2026-01-23 11:37:05.354967501 +0000 UTC m=+0.078783219 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 23 11:37:05 compute-0 python3.9[175397]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:05 compute-0 sudo[175391]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:05 compute-0 sudo[175552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auyaqrowfcbnnmcgeefxogmgeassyupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168225.6849535-571-103705778700924/AnsiballZ_file.py'
Jan 23 11:37:05 compute-0 sudo[175552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:06 compute-0 python3.9[175554]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:06 compute-0 sudo[175552]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:06 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 11:37:06 compute-0 sudo[175705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobwvcinzxcmzuxvvagokdfnduaoplpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168226.4014523-629-76417010461211/AnsiballZ_command.py'
Jan 23 11:37:06 compute-0 sudo[175705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:06 compute-0 python3.9[175707]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:06 compute-0 sudo[175705]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:07 compute-0 python3.9[175859]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:37:08 compute-0 sudo[176009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsycbgiouyglwfnquwicmabbjbzpvcjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168227.909302-647-27848236882077/AnsiballZ_systemd_service.py'
Jan 23 11:37:08 compute-0 sudo[176009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:08 compute-0 python3.9[176011]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:37:08 compute-0 systemd[1]: Reloading.
Jan 23 11:37:08 compute-0 systemd-rc-local-generator[176032]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:37:08 compute-0 systemd-sysv-generator[176035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:37:08 compute-0 sudo[176009]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:09 compute-0 sudo[176196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sybrzgmffmapcowzekfoqwvhhtpikpwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168229.034966-655-59596593188744/AnsiballZ_command.py'
Jan 23 11:37:09 compute-0 sudo[176196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:09 compute-0 python3.9[176198]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:09 compute-0 sudo[176196]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:09 compute-0 sudo[176349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qapzskznsifvrfvgaqlafevwguzztfyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168229.69731-655-43081028825589/AnsiballZ_command.py'
Jan 23 11:37:10 compute-0 sudo[176349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:10 compute-0 python3.9[176351]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:10 compute-0 sudo[176349]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:10 compute-0 sudo[176502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htzcswthwwcmbgsglmixbkxydjnnesbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168230.3845313-655-158393336849571/AnsiballZ_command.py'
Jan 23 11:37:10 compute-0 sudo[176502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:10 compute-0 python3.9[176504]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:10 compute-0 sudo[176502]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:11 compute-0 sudo[176655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfliejifqydptgkcgxhzvhcsfbhviqgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168231.0163457-655-31639426565330/AnsiballZ_command.py'
Jan 23 11:37:11 compute-0 sudo[176655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:11 compute-0 python3.9[176657]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:11 compute-0 sudo[176655]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:12 compute-0 sudo[176808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-furjmupwbnzqtboayveoqbdbehjciqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168231.747546-655-92031317394821/AnsiballZ_command.py'
Jan 23 11:37:12 compute-0 sudo[176808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:12 compute-0 python3.9[176810]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:12 compute-0 sudo[176808]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:12 compute-0 sudo[176961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondhkeueiwxbfklmgilabiszhhvbzngt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168232.3890808-655-110510716916497/AnsiballZ_command.py'
Jan 23 11:37:12 compute-0 sudo[176961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:12 compute-0 python3.9[176963]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:12 compute-0 sudo[176961]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:13 compute-0 sudo[177114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogfkqmsoxvyqugbbizxmasfjwsyntkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168233.078387-655-81106435151990/AnsiballZ_command.py'
Jan 23 11:37:13 compute-0 sudo[177114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:13 compute-0 python3.9[177116]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:13 compute-0 sudo[177114]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:13 compute-0 sudo[177267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzbyaqafwhnntbfxnbbsklhgrueavnpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168233.7259703-655-86414727706077/AnsiballZ_command.py'
Jan 23 11:37:13 compute-0 sudo[177267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:14 compute-0 python3.9[177269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:37:14 compute-0 sudo[177267]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:15 compute-0 sudo[177420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhfalvoabcxftolmwfdfguowlxosadgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168235.1070774-734-205288883003123/AnsiballZ_file.py'
Jan 23 11:37:15 compute-0 sudo[177420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:15 compute-0 python3.9[177422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:15 compute-0 sudo[177420]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:15 compute-0 sudo[177572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aphlqbyvrhevpkjpzdaapjqhmexxpbph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168235.734385-734-219407523005759/AnsiballZ_file.py'
Jan 23 11:37:15 compute-0 sudo[177572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:16 compute-0 python3.9[177574]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:16 compute-0 sudo[177572]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:16 compute-0 sudo[177724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuquagqcpwesjysqhbxbbpwnciokseer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168236.3507383-734-19592755446422/AnsiballZ_file.py'
Jan 23 11:37:16 compute-0 sudo[177724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:16 compute-0 python3.9[177726]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:16 compute-0 sudo[177724]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:17 compute-0 sudo[177876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mukofbkrzmahdzhevfmtctinappxxcsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168236.9654996-756-139258041427313/AnsiballZ_file.py'
Jan 23 11:37:17 compute-0 sudo[177876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:17 compute-0 python3.9[177878]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:17 compute-0 sudo[177876]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:17 compute-0 sudo[178028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flipnjqzqxppzjjmbcucbihkhqkeulfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168237.7174025-756-112250843762907/AnsiballZ_file.py'
Jan 23 11:37:17 compute-0 sudo[178028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:18 compute-0 python3.9[178030]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:18 compute-0 sudo[178028]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:18 compute-0 sudo[178180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkvveomtulqroejnxhfrmtlwtvcvldsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168238.3016245-756-156056373104673/AnsiballZ_file.py'
Jan 23 11:37:18 compute-0 sudo[178180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:18 compute-0 python3.9[178182]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:18 compute-0 sudo[178180]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:19 compute-0 sudo[178332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdnhzwdyhhfuswrtbfbdhklwkzavrvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168239.0577447-756-197913332805767/AnsiballZ_file.py'
Jan 23 11:37:19 compute-0 sudo[178332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:19 compute-0 python3.9[178334]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:19 compute-0 sudo[178332]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:19 compute-0 sudo[178484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrmjmfkvadymfqukiynwtryrzgaviqmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168239.6486254-756-103526405599041/AnsiballZ_file.py'
Jan 23 11:37:19 compute-0 sudo[178484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:20 compute-0 python3.9[178486]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:20 compute-0 sudo[178484]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:20 compute-0 sudo[178636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vesurnuwwrzotljbrcuwpakvpcwvrsqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168240.2179978-756-55628848913127/AnsiballZ_file.py'
Jan 23 11:37:20 compute-0 sudo[178636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:20 compute-0 python3.9[178638]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:20 compute-0 sudo[178636]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:21 compute-0 sudo[178788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzjiagplptzcnmzqmsexzuxojhmfcevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168240.7747827-756-229840239065426/AnsiballZ_file.py'
Jan 23 11:37:21 compute-0 sudo[178788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:21 compute-0 python3.9[178790]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:21 compute-0 sudo[178788]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:26 compute-0 sudo[178940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrltxjakdzkiykysunpjtmzhmlchqtpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168245.830357-925-129661493649352/AnsiballZ_getent.py'
Jan 23 11:37:26 compute-0 sudo[178940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:26 compute-0 python3.9[178942]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 23 11:37:26 compute-0 sudo[178940]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:26 compute-0 sudo[179093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmwjqocfdzjelhflewsjroasutuoyev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168246.5877383-933-50864154829141/AnsiballZ_group.py'
Jan 23 11:37:26 compute-0 sudo[179093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:27 compute-0 python3.9[179095]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 11:37:27 compute-0 groupadd[179096]: group added to /etc/group: name=nova, GID=42436
Jan 23 11:37:27 compute-0 groupadd[179096]: group added to /etc/gshadow: name=nova
Jan 23 11:37:27 compute-0 groupadd[179096]: new group: name=nova, GID=42436
Jan 23 11:37:27 compute-0 sudo[179093]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:27 compute-0 sudo[179251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqomgprxdynveebchwmnksrzzycprmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168247.4292903-941-275283758180124/AnsiballZ_user.py'
Jan 23 11:37:27 compute-0 sudo[179251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:28 compute-0 python3.9[179253]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 11:37:28 compute-0 useradd[179255]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 23 11:37:28 compute-0 useradd[179255]: add 'nova' to group 'libvirt'
Jan 23 11:37:28 compute-0 useradd[179255]: add 'nova' to shadow group 'libvirt'
Jan 23 11:37:28 compute-0 sudo[179251]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:37:29.076 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:37:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:37:29.077 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:37:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:37:29.077 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:37:29 compute-0 sshd-session[179286]: Accepted publickey for zuul from 192.168.122.30 port 56974 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:37:29 compute-0 systemd-logind[798]: New session 25 of user zuul.
Jan 23 11:37:29 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 23 11:37:29 compute-0 sshd-session[179286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:37:29 compute-0 sshd-session[179289]: Received disconnect from 192.168.122.30 port 56974:11: disconnected by user
Jan 23 11:37:29 compute-0 sshd-session[179289]: Disconnected from user zuul 192.168.122.30 port 56974
Jan 23 11:37:29 compute-0 sshd-session[179286]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:37:29 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 11:37:29 compute-0 systemd-logind[798]: Session 25 logged out. Waiting for processes to exit.
Jan 23 11:37:29 compute-0 systemd-logind[798]: Removed session 25.
Jan 23 11:37:29 compute-0 python3.9[179439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:30 compute-0 python3.9[179560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168249.4798722-966-72181517453928/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:30 compute-0 python3.9[179710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:31 compute-0 python3.9[179786]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:32 compute-0 python3.9[179936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:32 compute-0 python3.9[180057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168251.7066166-966-57967356384030/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:33 compute-0 podman[180181]: 2026-01-23 11:37:33.298234261 +0000 UTC m=+0.071819510 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 23 11:37:33 compute-0 python3.9[180218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:34 compute-0 python3.9[180345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168252.904534-966-172020190392226/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:34 compute-0 python3.9[180495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:35 compute-0 python3.9[180616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168254.2126627-966-104347273559901/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:35 compute-0 podman[180617]: 2026-01-23 11:37:35.602218293 +0000 UTC m=+0.072815534 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 11:37:36 compute-0 python3.9[180792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:36 compute-0 python3.9[180913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168255.651599-966-224203407641069/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:37 compute-0 sudo[181063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdtsrtmumydrnsvpmonbzmvnnxrrkzhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168256.8216372-1049-56512814468825/AnsiballZ_file.py'
Jan 23 11:37:37 compute-0 sudo[181063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:37 compute-0 python3.9[181065]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:37 compute-0 sudo[181063]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:37 compute-0 sudo[181215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmypfallhfynbsgazpxxyltfiskwdfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168257.4829905-1057-243143402844410/AnsiballZ_copy.py'
Jan 23 11:37:37 compute-0 sudo[181215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:37 compute-0 python3.9[181217]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:37 compute-0 sudo[181215]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:38 compute-0 sudo[181367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxgumalfrtwodddcdcefaufihmpvdzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168258.0737772-1065-35362525279242/AnsiballZ_stat.py'
Jan 23 11:37:38 compute-0 sudo[181367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:38 compute-0 python3.9[181369]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:38 compute-0 sudo[181367]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:39 compute-0 sudo[181519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kskrmtzyqkekeyrwhaacbdpbiligbjvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168258.7200744-1073-220764716795434/AnsiballZ_stat.py'
Jan 23 11:37:39 compute-0 sudo[181519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:39 compute-0 python3.9[181521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:39 compute-0 sudo[181519]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:39 compute-0 sudo[181642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevgpciiompzbmklgwqgrgloklnejxxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168258.7200744-1073-220764716795434/AnsiballZ_copy.py'
Jan 23 11:37:39 compute-0 sudo[181642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:39 compute-0 python3.9[181644]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769168258.7200744-1073-220764716795434/.source _original_basename=.jwodnnys follow=False checksum=fecb2069e387013b842685a301742608eecba560 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 23 11:37:39 compute-0 sudo[181642]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:40 compute-0 python3.9[181796]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:41 compute-0 python3.9[181948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:41 compute-0 python3.9[182069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168260.7252643-1099-21250486580115/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:42 compute-0 python3.9[182219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:37:43 compute-0 python3.9[182340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168261.9890647-1114-6649139832896/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:37:43 compute-0 sudo[182490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzvhrgsltrraotvuqebwcfaueuemeepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168263.3387504-1131-27666349552866/AnsiballZ_container_config_data.py'
Jan 23 11:37:43 compute-0 sudo[182490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:44 compute-0 python3.9[182492]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 23 11:37:44 compute-0 sudo[182490]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:44 compute-0 sudo[182642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtfquwdnbxubbguiqvomfeueguiuwfsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168264.3661962-1142-8316536235266/AnsiballZ_container_config_hash.py'
Jan 23 11:37:44 compute-0 sudo[182642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:45 compute-0 python3.9[182644]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:37:45 compute-0 sudo[182642]: pam_unix(sudo:session): session closed for user root
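
The two helper modules above are part of edpm-ansible: container_config_data collects the rendered JSON container definitions under /var/lib/openstack/config/containers, and container_config_hash fingerprints the config volume so that containers are recreated only when their configuration actually changes. One way such a fingerprint can be computed, shown as an illustration of the idea rather than the module's exact algorithm:

    # Illustrative content hash over a config volume; the real
    # container_config_hash module may select and order files differently.
    import hashlib
    import os

    def config_hash(root="/var/lib/openstack/config/containers"):
        h = hashlib.sha256()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()  # deterministic traversal order
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                h.update(path.encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()
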
Jan 23 11:37:45 compute-0 sudo[182794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imgbupngetkevzobzyzcduazvwzvrpxm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168265.4361293-1152-249609488768095/AnsiballZ_edpm_container_manage.py'
Jan 23 11:37:45 compute-0 sudo[182794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:46 compute-0 python3[182796]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:37:46 compute-0 podman[182834]: 2026-01-23 11:37:46.499005986 +0000 UTC m=+0.060126586 container create 9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 11:37:46 compute-0 podman[182834]: 2026-01-23 11:37:46.465041616 +0000 UTC m=+0.026162296 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 11:37:46 compute-0 python3[182796]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 23 11:37:46 compute-0 sudo[182794]: pam_unix(sudo:session): session closed for user root
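
The PODMAN-CONTAINER-DEBUG line shows how ansible-edpm_container_manage expands the config_data dict from nova_compute_init.json into a podman create invocation. A reduced sketch of that translation, covering only the keys visible in the log (the --label arguments are omitted, and the real module handles the quoting of the command string more carefully than a plain split):

    # Reduced sketch: map the logged config_data keys onto podman flags,
    # mirroring the PODMAN-CONTAINER-DEBUG command above.
    def podman_create_argv(name, cfg):
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", "/run/{}.pid".format(name)]
        for key, value in cfg.get("environment", {}).items():
            argv += ["--env", "{}={}".format(key, value)]
        argv += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        argv.append("--privileged={}".format(cfg.get("privileged", False)))
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            argv += ["--volume", volume]
        argv.append(cfg["image"])
        argv += str(cfg.get("command", "")).split()  # naive; see note above
        return argv
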
Jan 23 11:37:47 compute-0 sudo[183023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrmqmcugbcjyttyyvydzbzrwbnqrowvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168266.818242-1160-184430082508425/AnsiballZ_stat.py'
Jan 23 11:37:47 compute-0 sudo[183023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:47 compute-0 python3.9[183025]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:47 compute-0 sudo[183023]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:47 compute-0 sudo[183177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagwhaxfvcvkqafqqbfujqpaypupqgkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168267.663887-1172-185257187919478/AnsiballZ_container_config_data.py'
Jan 23 11:37:47 compute-0 sudo[183177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:48 compute-0 python3.9[183179]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 23 11:37:48 compute-0 sudo[183177]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:48 compute-0 sudo[183329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhehldtzqgqsfelpzqcsoqpqamwzrpbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168268.3876503-1183-50405251526665/AnsiballZ_container_config_hash.py'
Jan 23 11:37:48 compute-0 sudo[183329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:48 compute-0 python3.9[183331]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:37:48 compute-0 sudo[183329]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:49 compute-0 sudo[183481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbyeqhshaffxygzbbfsllfolzzdkgzol ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168269.165776-1193-18281432011295/AnsiballZ_edpm_container_manage.py'
Jan 23 11:37:49 compute-0 sudo[183481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:49 compute-0 python3[183483]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:37:49 compute-0 podman[183519]: 2026-01-23 11:37:49.843634418 +0000 UTC m=+0.044649049 container create fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 23 11:37:49 compute-0 podman[183519]: 2026-01-23 11:37:49.821457063 +0000 UTC m=+0.022471714 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 11:37:49 compute-0 python3[183483]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 23 11:37:49 compute-0 sudo[183481]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:50 compute-0 sudo[183707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csdbymbhzttvbfjhvxtinuxezfhzgefl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168270.0958815-1201-227383651545700/AnsiballZ_stat.py'
Jan 23 11:37:50 compute-0 sudo[183707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:50 compute-0 python3.9[183709]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:50 compute-0 sudo[183707]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:51 compute-0 sudo[183861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gudacaueojsrfixghgprhdoqiiftuprq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168270.849113-1210-59427478700166/AnsiballZ_file.py'
Jan 23 11:37:51 compute-0 sudo[183861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:51 compute-0 python3.9[183863]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:51 compute-0 sudo[183861]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:51 compute-0 sudo[184012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrutytrqmenzxrreacydwzcpkbxnwxve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168271.337815-1210-9211788972974/AnsiballZ_copy.py'
Jan 23 11:37:51 compute-0 sudo[184012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:51 compute-0 python3.9[184014]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168271.337815-1210-9211788972974/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:37:51 compute-0 sudo[184012]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:52 compute-0 sudo[184088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxdvwaecujtxdysrelqkfoziozfeqawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168271.337815-1210-9211788972974/AnsiballZ_systemd.py'
Jan 23 11:37:52 compute-0 sudo[184088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:52 compute-0 python3.9[184090]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:37:52 compute-0 systemd[1]: Reloading.
Jan 23 11:37:52 compute-0 systemd-rc-local-generator[184112]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:37:52 compute-0 systemd-sysv-generator[184117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:37:52 compute-0 sudo[184088]: pam_unix(sudo:session): session closed for user root
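
The copy task at 11:37:51 installed /etc/systemd/system/edpm_nova_compute.service (its content is not logged), and the daemon-reload above makes systemd pick it up before the restart that follows. Judging from the conmon pidfile passed to podman create and from the "Starting nova_compute container..." messages below, the unit plausibly follows podman's usual forking pattern; the following is a hypothetical reconstruction, not the actual file:

    # Hypothetical shape of the unit installed above; every directive is
    # an assumption, since the real file content never appears in the log.
    UNIT = """\
    [Unit]
    Description=nova_compute container

    [Service]
    Type=forking
    Restart=always
    PIDFile=/run/nova_compute.pid
    ExecStart=/usr/bin/podman start nova_compute
    ExecStop=/usr/bin/podman stop -t 60 nova_compute

    [Install]
    WantedBy=multi-user.target
    """
    print(UNIT)
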
Jan 23 11:37:53 compute-0 sudo[184199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkplacohgbicxejuipnfqnkcbajcgvkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168271.337815-1210-9211788972974/AnsiballZ_systemd.py'
Jan 23 11:37:53 compute-0 sudo[184199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:53 compute-0 python3.9[184201]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:37:53 compute-0 systemd[1]: Reloading.
Jan 23 11:37:53 compute-0 systemd-rc-local-generator[184228]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:37:53 compute-0 systemd-sysv-generator[184233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:37:53 compute-0 systemd[1]: Starting nova_compute container...
Jan 23 11:37:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:53 compute-0 podman[184240]: 2026-01-23 11:37:53.873016706 +0000 UTC m=+0.089913082 container init fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 11:37:53 compute-0 podman[184240]: 2026-01-23 11:37:53.881353145 +0000 UTC m=+0.098249481 container start fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 11:37:53 compute-0 podman[184240]: nova_compute
Jan 23 11:37:53 compute-0 nova_compute[184255]: + sudo -E kolla_set_configs
Jan 23 11:37:53 compute-0 systemd[1]: Started nova_compute container.
Jan 23 11:37:53 compute-0 sudo[184199]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Validating config file
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying service configuration files
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Deleting /etc/ceph
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Creating directory /etc/ceph
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Writing out command to execute
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:53 compute-0 nova_compute[184255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 11:37:53 compute-0 nova_compute[184255]: ++ cat /run_command
Jan 23 11:37:53 compute-0 nova_compute[184255]: + CMD=nova-compute
Jan 23 11:37:53 compute-0 nova_compute[184255]: + ARGS=
Jan 23 11:37:53 compute-0 nova_compute[184255]: + sudo kolla_copy_cacerts
Jan 23 11:37:53 compute-0 nova_compute[184255]: + [[ ! -n '' ]]
Jan 23 11:37:53 compute-0 nova_compute[184255]: + . kolla_extend_start
Jan 23 11:37:53 compute-0 nova_compute[184255]: Running command: 'nova-compute'
Jan 23 11:37:53 compute-0 nova_compute[184255]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 11:37:53 compute-0 nova_compute[184255]: + umask 0022
Jan 23 11:37:53 compute-0 nova_compute[184255]: + exec nova-compute
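
The kolla_set_configs run above is driven entirely by /var/lib/kolla/config_files/config.json inside the container: with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS every listed file is re-copied on each start, the "command" key is written out to /run_command (read back by the startup trace just above before exec nova-compute), and a "permissions" stanza would account for the second round of "Setting permission for /var/lib/nova/.ssh/..." lines. A sketch of a config.json consistent with the copies actually logged, assuming kolla's documented schema; owners and modes are assumptions, and only a few entries are spelled out:

    # Partial reconstruction of config.json based on the INFO copy lines
    # above; "owner"/"perm" values are assumptions, not logged facts.
    import json

    config = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/nova-blank.conf",
             "dest": "/etc/nova/nova.conf",
             "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
            # further entries (25-nova-extra.conf, 02-nova-host-specific.conf,
            # the ceph directory, ssh-privatekey/ssh-config, and the
            # run-on-host wrapper copied over /usr/sbin/iscsiadm) follow
            # the same shape.
        ],
        "permissions": [
            {"path": "/var/lib/nova/.ssh", "owner": "nova:nova",
             "recurse": True},
        ],
    }
    print(json.dumps(config, indent=2))
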
Jan 23 11:37:54 compute-0 python3.9[184416]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:55 compute-0 python3.9[184567]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:55 compute-0 nova_compute[184255]: 2026-01-23 11:37:55.892 184259 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 11:37:55 compute-0 nova_compute[184255]: 2026-01-23 11:37:55.893 184259 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 11:37:55 compute-0 nova_compute[184255]: 2026-01-23 11:37:55.893 184259 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 11:37:55 compute-0 nova_compute[184255]: 2026-01-23 11:37:55.893 184259 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.028 184259 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.038 184259 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.039 184259 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
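
The failed grep above (exit code 1, logged as "Not Retrying") looks like a capability probe rather than an error: the string node.session.scan is searched for in /sbin/iscsiadm, which inside this container is the run-on-host wrapper installed at 11:37:53, to decide whether manual iSCSI session scanning can be used; an empty result simply means the marker is absent and the caller falls back. A sketch of such a probe with the same oslo helper named in the log (the interpretation of the probe's purpose is an inference from context):

    # check_exit_code=False lets grep's exit 1 come back as data instead
    # of raising, matching the "failed. Not Retrying." debug line above.
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=False)
    supports_manual_scan = bool(out.strip())
    print("manual scan marker present:", supports_manual_scan)
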
Jan 23 11:37:56 compute-0 python3.9[184721]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.670 184259 INFO nova.virt.driver [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.757 184259 INFO nova.compute.provider_config [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.780 184259 DEBUG oslo_concurrency.lockutils [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.780 184259 DEBUG oslo_concurrency.lockutils [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.780 184259 DEBUG oslo_concurrency.lockutils [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.781 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.781 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.781 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.781 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.781 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.781 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.782 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.783 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.784 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.784 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.784 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.784 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.784 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.785 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.786 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.787 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.788 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.789 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.790 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.791 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.792 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.793 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.794 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.795 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.796 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.797 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.798 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.799 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.800 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.801 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.802 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.803 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.804 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.805 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.806 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.806 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.806 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.806 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.806 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.806 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.807 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.808 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.808 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.808 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.808 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.808 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.808 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.809 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.810 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.810 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.810 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.810 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.810 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.810 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.811 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.812 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.813 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.813 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.813 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.813 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.813 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.813 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.814 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.815 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.816 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.817 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.818 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.819 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.820 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.821 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.821 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.821 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.821 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.821 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.821 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.822 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.823 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.824 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.825 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.826 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.827 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.827 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.827 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.827 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.827 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.827 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.828 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.829 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.829 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.829 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.829 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.829 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.830 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.831 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.831 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.831 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.831 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.831 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.832 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.832 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.832 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.832 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.832 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.833 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.834 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.835 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.836 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.837 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.837 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.837 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.837 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.837 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.837 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.838 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.838 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.838 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.838 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.838 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.838 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.839 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.839 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.839 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.839 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.839 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.840 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.840 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.840 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.840 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.840 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.841 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.841 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.841 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.841 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.841 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.842 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.842 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.842 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.842 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.842 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.843 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.843 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.843 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.843 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.843 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.843 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.844 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.845 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.846 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.847 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.847 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.847 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.847 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.847 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.847 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.848 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.848 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.848 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.848 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.848 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.848 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.849 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.849 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.849 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.849 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.849 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.850 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.850 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.850 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.850 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.850 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.851 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.852 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.853 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.854 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.854 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.854 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.854 184259 WARNING oslo_config.cfg [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 11:37:56 compute-0 nova_compute[184255]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 11:37:56 compute-0 nova_compute[184255]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 11:37:56 compute-0 nova_compute[184255]: and ``live_migration_inbound_addr`` respectively.
Jan 23 11:37:56 compute-0 nova_compute[184255]: ).  Its value may be silently ignored in the future.
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.855 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.855 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.855 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.855 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.855 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.856 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.856 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.856 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.856 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.856 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.856 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.857 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.858 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.858 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.858 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.858 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.858 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.858 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.859 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.859 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.859 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.859 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.859 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.859 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.860 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.860 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.860 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.860 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.860 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.861 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.861 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.861 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.861 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.861 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.862 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.862 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.862 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.862 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.862 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.863 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.863 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.863 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.863 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.863 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
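Every record in this dump ends with the same marker, log_opt_values at /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609: a single oslo.config helper emits the entire effective configuration at DEBUG level when the service starts. A minimal sketch of that mechanism, assuming nothing beyond the oslo.config package already named in the logged path (the virt_type option is registered here only so the example has something to print):

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.CONF
    conf.register_opts([cfg.StrOpt('virt_type', default='kvm')],
                       group='libvirt')
    conf(args=[])  # parse command line / config files (empty here)

    # Emits one DEBUG line per registered option, defaults included,
    # which is why unset options such as libvirt.rbd_user = None
    # still appear in the journal.
    conf.log_opt_values(LOG, logging.DEBUG)

The single request id shared by every line above is consistent with one such call made once at service startup.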
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.864 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.865 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.865 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.865 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.865 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.865 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.865 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.866 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.867 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.868 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.869 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.870 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.871 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.872 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.873 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.874 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.874 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.874 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.874 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
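Taken together, the placement.* values above determine the Keystone credentials nova-compute uses to reach the Placement API. A reconstruction of the corresponding [placement] section of nova.conf, using only values visible in this dump (the password is masked in the log and stays masked here):

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal

Options logged as None (timeouts, retries, TLS files) are simply unset and need not appear in the file; as noted above, log_opt_values prints every registered option whether or not it was explicitly configured.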
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.874 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.874 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.875 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.876 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.876 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.876 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.876 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.876 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.876 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.877 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.878 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.879 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.879 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.879 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.879 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.879 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.880 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.880 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.880 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.880 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.881 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.881 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.881 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.881 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.881 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.882 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.882 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.882 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.882 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.882 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.882 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
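Within the filter_scheduler.* block above, enabled_filters is the value deployments most often tune. A nova.conf excerpt reproducing exactly the filter list logged above (list-typed oslo.config options are written comma-separated on one line):

    [filter_scheduler]
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Keeping available_filters at nova.scheduler.filters.all_filters, as logged above, leaves the whole filter catalogue loadable, so enabled_filters can be re-ordered or extended without code changes.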
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.883 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.883 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.883 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.883 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.883 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.884 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.884 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.884 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.884 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.884 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.885 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.885 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.885 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.885 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.885 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.885 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.886 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.886 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.886 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.886 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.886 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.887 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.887 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.887 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.887 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.887 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.887 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.888 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.889 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.890 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.890 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.890 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.890 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.890 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.890 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.891 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.891 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.891 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.891 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.891 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.891 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.892 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.893 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.894 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.894 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.894 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.894 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.894 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.894 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.895 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.895 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.895 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.895 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.895 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.895 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.896 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.897 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.897 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.897 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.897 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.897 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.897 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.898 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.898 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.898 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.898 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.898 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.899 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.900 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.901 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.901 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.901 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.901 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.901 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.901 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.902 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.902 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.902 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.902 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.902 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.903 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.903 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.903 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.903 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.903 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.903 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.904 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.904 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.904 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.904 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.904 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.904 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.905 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.905 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.905 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.905 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.905 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.905 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.906 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.906 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.906 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.906 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.906 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.906 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.907 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.908 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.909 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.909 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.909 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.909 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.909 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.909 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.910 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.911 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.912 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.912 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.912 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.912 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.912 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.912 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.913 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.913 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.913 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.913 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.913 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.913 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.914 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.915 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.916 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.917 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.918 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.918 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.918 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.918 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.918 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.918 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.919 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.919 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.919 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.919 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.920 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.920 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.920 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.920 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.920 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.920 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.921 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.921 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.921 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.921 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.922 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.922 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.922 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.922 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.923 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.923 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.923 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.923 184259 DEBUG oslo_service.service [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.924 184259 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.944 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.946 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.946 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 23 11:37:56 compute-0 nova_compute[184255]: 2026-01-23 11:37:56.947 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 23 11:37:56 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 11:37:57 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.025 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7febf5e74e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.027 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7febf5e74e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.027 184259 INFO nova.virt.libvirt.driver [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Connection event '1' reason 'None'
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.044 184259 WARNING nova.virt.libvirt.driver [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.045 184259 DEBUG nova.virt.libvirt.volume.mount [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 23 11:37:57 compute-0 sudo[184915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btljyhwoxzbcnmexavcqanmppwgjniug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168276.5907164-1270-152136238107301/AnsiballZ_podman_container.py'
Jan 23 11:37:57 compute-0 sudo[184915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:57 compute-0 python3.9[184924]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 11:37:57 compute-0 sudo[184915]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:57 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.854 184259 INFO nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 11:37:57 compute-0 nova_compute[184255]: 
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <host>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <uuid>850fefef-4162-4be1-a464-e0586d5e52c6</uuid>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <cpu>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <arch>x86_64</arch>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model>EPYC-Rome-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <vendor>AMD</vendor>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <microcode version='16777317'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <signature family='23' model='49' stepping='0'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='x2apic'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='tsc-deadline'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='osxsave'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='hypervisor'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='tsc_adjust'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='spec-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='stibp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='arch-capabilities'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='cmp_legacy'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='topoext'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='virt-ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='lbrv'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='tsc-scale'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='vmcb-clean'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='pause-filter'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='pfthreshold'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='svme-addr-chk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='rdctl-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='skip-l1dfl-vmentry'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='mds-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature name='pschange-mc-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <pages unit='KiB' size='4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <pages unit='KiB' size='2048'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <pages unit='KiB' size='1048576'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </cpu>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <power_management>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <suspend_mem/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <suspend_disk/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <suspend_hybrid/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </power_management>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <iommu support='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <migration_features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <live/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <uri_transports>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <uri_transport>tcp</uri_transport>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <uri_transport>rdma</uri_transport>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </uri_transports>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </migration_features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <topology>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <cells num='1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <cell id='0'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           <memory unit='KiB'>7864316</memory>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           <pages unit='KiB' size='2048'>0</pages>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           <distances>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <sibling id='0' value='10'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           </distances>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           <cpus num='8'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:           </cpus>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         </cell>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </cells>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </topology>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <cache>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </cache>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <secmodel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model>selinux</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <doi>0</doi>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </secmodel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <secmodel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model>dac</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <doi>0</doi>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </secmodel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </host>
Jan 23 11:37:57 compute-0 nova_compute[184255]: 
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <guest>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <os_type>hvm</os_type>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <arch name='i686'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <wordsize>32</wordsize>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <domain type='qemu'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <domain type='kvm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </arch>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <pae/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <nonpae/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <acpi default='on' toggle='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <apic default='on' toggle='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <cpuselection/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <deviceboot/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <disksnapshot default='on' toggle='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <externalSnapshot/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </guest>
Jan 23 11:37:57 compute-0 nova_compute[184255]: 
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <guest>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <os_type>hvm</os_type>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <arch name='x86_64'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <wordsize>64</wordsize>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <domain type='qemu'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <domain type='kvm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </arch>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <acpi default='on' toggle='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <apic default='on' toggle='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <cpuselection/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <deviceboot/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <disksnapshot default='on' toggle='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <externalSnapshot/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </guest>
Jan 23 11:37:57 compute-0 nova_compute[184255]: 
Jan 23 11:37:57 compute-0 nova_compute[184255]: </capabilities>
Jan 23 11:37:57 compute-0 nova_compute[184255]: 
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.867 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.891 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 11:37:57 compute-0 nova_compute[184255]: <domainCapabilities>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <domain>kvm</domain>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <arch>i686</arch>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <vcpu max='240'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <iothreads supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <os supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <enum name='firmware'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <loader supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>rom</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>pflash</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='readonly'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>yes</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='secure'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </loader>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </os>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <cpu>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='maximum' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='maximumMigratable'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='host-model' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <vendor>AMD</vendor>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='x2apic'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='stibp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='succor'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='lbrv'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='custom' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cooperlake'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Dhyana-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='IvyBridge'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='KnightsMill'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='KnightsMill-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SierraForest'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 sudo[185110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbvehncygxguiflzkmaetwzlfdivqrke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168277.630788-1278-27157481446131/AnsiballZ_systemd.py'
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Snowridge'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:37:57 compute-0 sudo[185110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='athlon'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='athlon-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='core2duo'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='core2duo-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='coreduo'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='coreduo-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='n270'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='n270-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='phenom'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='phenom-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </cpu>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <memoryBacking supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <enum name='sourceType'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <value>file</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <value>anonymous</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <value>memfd</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </memoryBacking>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <devices>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <disk supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='diskDevice'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>disk</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>cdrom</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>floppy</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>lun</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>ide</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>fdc</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>sata</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </disk>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <graphics supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>vnc</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>egl-headless</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </graphics>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <video supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='modelType'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>vga</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>cirrus</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>none</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>bochs</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>ramfb</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </video>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <hostdev supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='mode'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>subsystem</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='startupPolicy'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>mandatory</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>requisite</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>optional</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='subsysType'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>pci</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='capsType'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='pciBackend'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </hostdev>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <rng supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>random</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>egd</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </rng>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <filesystem supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='driverType'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>path</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>handle</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>virtiofs</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </filesystem>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <tpm supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>tpm-tis</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>tpm-crb</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>emulator</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>external</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='backendVersion'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>2.0</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </tpm>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <redirdev supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </redirdev>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <channel supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </channel>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <crypto supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='model'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>qemu</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </crypto>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <interface supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='backendType'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>passt</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </interface>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <panic supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>isa</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>hyperv</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </panic>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <console supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>null</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>vc</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>dev</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>file</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>pipe</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>stdio</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>udp</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>tcp</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>qemu-vdagent</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </console>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </devices>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <features>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <gic supported='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <vmcoreinfo supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <genid supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <backingStoreInput supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <backup supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <async-teardown supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <s390-pv supported='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <ps2 supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <tdx supported='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <sev supported='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <sgx supported='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <hyperv supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='features'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>relaxed</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>vapic</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>spinlocks</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>vpindex</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>runtime</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>synic</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>stimer</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>reset</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>vendor_id</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>frequencies</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>reenlightenment</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>tlbflush</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>ipi</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>avic</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>emsr_bitmap</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>xmm_input</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <defaults>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <spinlocks>4095</spinlocks>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <stimer_direct>on</stimer_direct>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </defaults>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </hyperv>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <launchSecurity supported='no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </features>
Jan 23 11:37:57 compute-0 nova_compute[184255]: </domainCapabilities>
Jan 23 11:37:57 compute-0 nova_compute[184255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
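
The domainCapabilities document logged above is what nova's libvirt driver fetches once per (arch, machine_type) pair and caches via _get_domain_capabilities. A minimal sketch of reproducing the same query outside nova, assuming libvirt-python is installed and a qemu:///system connection is available; the emulator path, arch, and machine type mirror the i686/q35 record that follows, and the XPath expressions match the XML layout shown in the log:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
# virConnect.getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',  # <path> reported in the log
    'i686',                   # arch queried in the next record
    'q35',                    # machine_type from the debug line
    'kvm',                    # <domain> type
    0)
root = ET.fromstring(caps_xml)

# Walk the custom-mode CPU models; each usable='no' model has a matching
# <blockers> element naming the host CPUID features it is missing.
for model in root.findall("./cpu/mode[@name='custom']/model"):
    if model.get('usable') == 'yes':
        print('usable :', model.text)
    else:
        blockers = root.find(
            "./cpu/mode[@name='custom']/blockers[@model='%s']" % model.text)
        missing = ([f.get('name') for f in blockers.findall('feature')]
                   if blockers is not None else [])
        print('blocked:', model.text, '(missing: %s)' % ', '.join(missing))

conn.close()

The usable/blockers pairs are how the driver decides which named CPU models it can offer: the host-model section reports an AMD EPYC-Rome host, so the Intel-only models above (Skylake, Cascadelake, Snowridge, and so on) are blocked by features such as avx512*, pku, and erms that this host's CPUID does not expose.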
Jan 23 11:37:57 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.902 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 11:37:57 compute-0 nova_compute[184255]: <domainCapabilities>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <domain>kvm</domain>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <arch>i686</arch>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <vcpu max='4096'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <iothreads supported='yes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <os supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <enum name='firmware'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <loader supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>rom</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>pflash</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='readonly'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>yes</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='secure'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </loader>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   </os>
Jan 23 11:37:57 compute-0 nova_compute[184255]:   <cpu>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='maximum' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <enum name='maximumMigratable'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='host-model' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <vendor>AMD</vendor>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='x2apic'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='stibp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='succor'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='lbrv'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:57 compute-0 nova_compute[184255]:     <mode name='custom' supported='yes'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cooperlake'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Denverton-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Dhyana-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='EPYC-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Haswell-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:37:57 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:57 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='KnightsMill'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='KnightsMill-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='athlon'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='athlon-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='core2duo'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='core2duo-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='coreduo'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='coreduo-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='n270'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='n270-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='phenom'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='phenom-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </cpu>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <memoryBacking supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <enum name='sourceType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>file</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>anonymous</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>memfd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </memoryBacking>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <devices>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <disk supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='diskDevice'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>disk</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>cdrom</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>floppy</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>lun</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>fdc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>sata</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </disk>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <graphics supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vnc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>egl-headless</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </graphics>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <video supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='modelType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vga</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>cirrus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>none</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>bochs</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ramfb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </video>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <hostdev supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='mode'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>subsystem</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='startupPolicy'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>mandatory</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>requisite</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>optional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='subsysType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pci</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='capsType'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='pciBackend'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </hostdev>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <rng supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>random</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>egd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </rng>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <filesystem supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='driverType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>path</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>handle</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtiofs</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </filesystem>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <tpm supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tpm-tis</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tpm-crb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>emulator</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>external</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendVersion'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>2.0</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </tpm>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <redirdev supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </redirdev>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <channel supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </channel>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <crypto supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>qemu</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </crypto>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <interface supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>passt</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </interface>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <panic supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>isa</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>hyperv</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </panic>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <console supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>null</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dev</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>file</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pipe</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>stdio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>udp</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tcp</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>qemu-vdagent</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </console>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </devices>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <features>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <gic supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <vmcoreinfo supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <genid supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <backingStoreInput supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <backup supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <async-teardown supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <s390-pv supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <ps2 supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <tdx supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <sev supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <sgx supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <hyperv supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='features'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>relaxed</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vapic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>spinlocks</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vpindex</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>runtime</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>synic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>stimer</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>reset</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vendor_id</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>frequencies</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>reenlightenment</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tlbflush</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ipi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>avic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>emsr_bitmap</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>xmm_input</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <defaults>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <spinlocks>4095</spinlocks>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <stimer_direct>on</stimer_direct>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </defaults>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </hyperv>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <launchSecurity supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </features>
Jan 23 11:37:58 compute-0 nova_compute[184255]: </domainCapabilities>
Jan 23 11:37:58 compute-0 nova_compute[184255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.962 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:57.968 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 11:37:58 compute-0 nova_compute[184255]: <domainCapabilities>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <domain>kvm</domain>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <arch>x86_64</arch>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <vcpu max='240'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <iothreads supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <os supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <enum name='firmware'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <loader supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>rom</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pflash</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='readonly'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>yes</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='secure'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </loader>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </os>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <cpu>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='maximum' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='maximumMigratable'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='host-model' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <vendor>AMD</vendor>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='x2apic'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='stibp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='ssbd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='succor'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='lbrv'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='custom' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cooperlake'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Dhyana-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='KnightsMill'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='KnightsMill-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='athlon'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='athlon-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='core2duo'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='core2duo-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='coreduo'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='coreduo-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='n270'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='n270-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='phenom'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='phenom-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </cpu>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <memoryBacking supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <enum name='sourceType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>file</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>anonymous</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>memfd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </memoryBacking>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <devices>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <disk supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='diskDevice'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>disk</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>cdrom</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>floppy</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>lun</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ide</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>fdc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>sata</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </disk>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <graphics supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vnc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>egl-headless</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </graphics>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <video supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='modelType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vga</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>cirrus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>none</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>bochs</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ramfb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </video>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <hostdev supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='mode'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>subsystem</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='startupPolicy'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>mandatory</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>requisite</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>optional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='subsysType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pci</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='capsType'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='pciBackend'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </hostdev>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <rng supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>random</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>egd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </rng>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <filesystem supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='driverType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>path</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>handle</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtiofs</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </filesystem>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <tpm supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tpm-tis</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tpm-crb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>emulator</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>external</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendVersion'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>2.0</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </tpm>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <redirdev supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </redirdev>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <channel supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </channel>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <crypto supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>qemu</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </crypto>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <interface supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>passt</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </interface>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <panic supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>isa</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>hyperv</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </panic>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <console supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>null</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dev</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>file</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pipe</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>stdio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>udp</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tcp</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>qemu-vdagent</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </console>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </devices>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <features>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <gic supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <vmcoreinfo supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <genid supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <backingStoreInput supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <backup supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <async-teardown supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <s390-pv supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <ps2 supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <tdx supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <sev supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <sgx supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <hyperv supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='features'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>relaxed</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vapic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>spinlocks</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vpindex</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>runtime</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>synic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>stimer</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>reset</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vendor_id</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>frequencies</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>reenlightenment</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tlbflush</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ipi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>avic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>emsr_bitmap</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>xmm_input</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <defaults>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <spinlocks>4095</spinlocks>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <stimer_direct>on</stimer_direct>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </defaults>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </hyperv>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <launchSecurity supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </features>
Jan 23 11:37:58 compute-0 nova_compute[184255]: </domainCapabilities>
Jan 23 11:37:58 compute-0 nova_compute[184255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.041 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 23 11:37:58 compute-0 nova_compute[184255]: <domainCapabilities>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <domain>kvm</domain>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <arch>x86_64</arch>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <vcpu max='4096'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <iothreads supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <os supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <enum name='firmware'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>efi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <loader supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>rom</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pflash</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='readonly'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>yes</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='secure'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>yes</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>no</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </loader>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </os>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <cpu>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='maximum' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='maximumMigratable'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>on</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>off</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='host-model' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <vendor>AMD</vendor>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='x2apic'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='stibp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='ssbd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='succor'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='lbrv'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <mode name='custom' supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Broadwell-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ddpd-u'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sha512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm3'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sm4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cooperlake'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Cooperlake-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Denverton-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Dhyana-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amd-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='auto-ibrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibpb-brtype'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='no-nested-data-bp'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='null-sel-clr-base'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='perfmon-v2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbpb'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='stibp-always-on'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='EPYC-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-128'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-256'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx10-512'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='prefetchiti'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Haswell-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='IvyBridge-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='KnightsMill'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='KnightsMill-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4fmaps'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-4vnniw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512er'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512pf'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fma4'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tbm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xop'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='amx-tile'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-bf16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-fp16'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bitalg'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vbmi2'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrc'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fzrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='la57'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='taa-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='tsx-ldtrk'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='SierraForest-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ifma'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-ne-convert'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx-vnni-int8'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bhi-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='bus-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cmpccxadd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fbsdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='fsrs'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ibrs-all'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='intel-psfd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ipred-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='lam'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mcdt-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pbrsb-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='psdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rrsba-ctrl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='serialize'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vaes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='vpclmulqdq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='hle'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='rtm'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512bw'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512cd'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512dq'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512f'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='avx512vl'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='invpcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pcid'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='pku'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='mpx'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v2'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v3'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='core-capability'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='split-lock-detect'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='Snowridge-v4'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='cldemote'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='erms'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='gfni'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdir64b'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='movdiri'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='xsaves'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='athlon'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='athlon-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='core2duo'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='core2duo-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='coreduo'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='coreduo-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='n270'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='n270-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='ss'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='phenom'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <blockers model='phenom-v1'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnow'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <feature name='3dnowext'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </blockers>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </mode>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </cpu>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <memoryBacking supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <enum name='sourceType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>file</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>anonymous</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <value>memfd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </memoryBacking>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <devices>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <disk supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='diskDevice'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>disk</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>cdrom</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>floppy</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>lun</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>fdc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>sata</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </disk>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <graphics supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vnc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>egl-headless</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </graphics>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <video supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='modelType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vga</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>cirrus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>none</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>bochs</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ramfb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </video>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <hostdev supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='mode'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>subsystem</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='startupPolicy'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>mandatory</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>requisite</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>optional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='subsysType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pci</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>scsi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='capsType'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='pciBackend'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </hostdev>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <rng supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtio-non-transitional</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>random</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>egd</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </rng>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <filesystem supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='driverType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>path</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>handle</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>virtiofs</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </filesystem>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <tpm supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tpm-tis</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tpm-crb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>emulator</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>external</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendVersion'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>2.0</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </tpm>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <redirdev supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='bus'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>usb</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </redirdev>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <channel supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </channel>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <crypto supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>qemu</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendModel'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>builtin</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </crypto>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <interface supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='backendType'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>default</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>passt</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </interface>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <panic supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='model'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>isa</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>hyperv</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </panic>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <console supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='type'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>null</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vc</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pty</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dev</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>file</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>pipe</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>stdio</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>udp</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tcp</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>unix</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>qemu-vdagent</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>dbus</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </console>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </devices>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   <features>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <gic supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <vmcoreinfo supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <genid supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <backingStoreInput supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <backup supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <async-teardown supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <s390-pv supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <ps2 supported='yes'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <tdx supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <sev supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <sgx supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <hyperv supported='yes'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <enum name='features'>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>relaxed</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vapic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>spinlocks</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vpindex</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>runtime</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>synic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>stimer</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>reset</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>vendor_id</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>frequencies</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>reenlightenment</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>tlbflush</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>ipi</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>avic</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>emsr_bitmap</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <value>xmm_input</value>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </enum>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       <defaults>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <spinlocks>4095</spinlocks>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <stimer_direct>on</stimer_direct>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:37:58 compute-0 nova_compute[184255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:37:58 compute-0 nova_compute[184255]:       </defaults>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     </hyperv>
Jan 23 11:37:58 compute-0 nova_compute[184255]:     <launchSecurity supported='no'/>
Jan 23 11:37:58 compute-0 nova_compute[184255]:   </features>
Jan 23 11:37:58 compute-0 nova_compute[184255]: </domainCapabilities>
Jan 23 11:37:58 compute-0 nova_compute[184255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
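The block above is the libvirt domainCapabilities document that nova-compute fetches once at startup and logs verbatim at DEBUG level: custom-mode CPU models with their usability and per-model feature blockers, supported device/enum values, and host feature flags. The same document can be fetched and filtered directly with libvirt-python; a minimal sketch, assuming the usual qemu:///system URI on the compute host (the connection details are illustrative, not nova's exact code path):

    import libvirt
    import xml.etree.ElementTree as ET

    # Same underlying API that nova's _get_domain_capabilities() wraps.
    conn = libvirt.open("qemu:///system")
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm")
    conn.close()

    root = ET.fromstring(caps_xml)
    custom = root.find(".//cpu/mode[@name='custom']")
    # Models reported usable='yes', e.g. Westmere/Westmere-IBRS above.
    usable = [m.text for m in custom.findall("model") if m.get("usable") == "yes"]
    # For unusable models, the <blockers> element lists the missing host features.
    blockers = {b.get("model"): [f.get("name") for f in b.findall("feature")]
                for b in custom.findall("blockers")}
    print("usable:", usable)
    print("Snowridge-v2 blocked by:", blockers.get("Snowridge-v2"))

Per the log, Snowridge-v2 would print the seven features listed above (cldemote, core-capability, erms, gfni, movdir64b, movdiri, split-lock-detect).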
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.115 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.116 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.117 184259 DEBUG nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.121 184259 INFO nova.virt.libvirt.host [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Secure Boot support detected
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.123 184259 INFO nova.virt.libvirt.driver [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.136 184259 DEBUG nova.virt.libvirt.driver [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
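The post-copy and vTPM messages above are configuration-driven rather than hardware probes. Both option names below are real nova [libvirt] options; the values shown are an illustrative sketch consistent with the logged behavior, not a dump of this deployment's actual files:

    [libvirt]
    # Post-copy is permitted and available, so auto-converge is not used
    # (per the INFO line above).
    live_migration_permit_post_copy = true
    # Enables the emulated (swtpm) TPM path checked by _check_vtpm_support.
    swtpm_enabled = true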
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.188 184259 INFO nova.virt.node [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Determined node identity 77dd020c-2f5c-40b0-b660-8a95a28aabbd from /var/lib/nova/compute_id
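The node identity is just a UUID persisted in the state directory, so the lookup the INFO line describes reduces to a file read (path taken from the log line above):

    from pathlib import Path

    # Stable per-node identity nova reports to the database and placement.
    node_uuid = Path("/var/lib/nova/compute_id").read_text().strip()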
Jan 23 11:37:58 compute-0 python3.9[185112]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.266 184259 WARNING nova.compute.manager [None req-35674d47-012b-4158-9100-0041287f554b - - - - - -] Compute nodes ['77dd020c-2f5c-40b0-b660-8a95a28aabbd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 23 11:37:58 compute-0 systemd[1]: Stopping nova_compute container...
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.332 184259 DEBUG oslo_concurrency.lockutils [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.333 184259 DEBUG oslo_concurrency.lockutils [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:37:58 compute-0 nova_compute[184255]: 2026-01-23 11:37:58.333 184259 DEBUG oslo_concurrency.lockutils [None req-e72aca2e-d710-4644-abd0-be803776b730 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:37:58 compute-0 virtqemud[184842]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 23 11:37:58 compute-0 virtqemud[184842]: hostname: compute-0
Jan 23 11:37:58 compute-0 virtqemud[184842]: End of file while reading data: Input/output error
Jan 23 11:37:58 compute-0 systemd[1]: libpod-fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b.scope: Deactivated successfully.
Jan 23 11:37:58 compute-0 systemd[1]: libpod-fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b.scope: Consumed 2.947s CPU time.
Jan 23 11:37:58 compute-0 podman[185116]: 2026-01-23 11:37:58.739982178 +0000 UTC m=+0.439955607 container died fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b-userdata-shm.mount: Deactivated successfully.
Jan 23 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea-merged.mount: Deactivated successfully.
Jan 23 11:37:58 compute-0 podman[185116]: 2026-01-23 11:37:58.799396836 +0000 UTC m=+0.499370275 container cleanup fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2)
Jan 23 11:37:58 compute-0 podman[185116]: nova_compute
Jan 23 11:37:58 compute-0 podman[185144]: nova_compute
Jan 23 11:37:58 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 23 11:37:58 compute-0 systemd[1]: Stopped nova_compute container.
Jan 23 11:37:58 compute-0 systemd[1]: Starting nova_compute container...
Jan 23 11:37:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5c0d56ee6af3afd216696663d9f90e89cd8dd1ec85e3e77774fc295ed9e0ea/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 11:37:58 compute-0 podman[185157]: 2026-01-23 11:37:58.998549023 +0000 UTC m=+0.096734844 container init fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute)
Jan 23 11:37:59 compute-0 podman[185157]: 2026-01-23 11:37:59.009431305 +0000 UTC m=+0.107617116 container start fd9cada4a983d888eaa352ebce7a53214545ca151757229d65da7e2fe4cdfb0b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 23 11:37:59 compute-0 podman[185157]: nova_compute
Jan 23 11:37:59 compute-0 nova_compute[185173]: + sudo -E kolla_set_configs
Jan 23 11:37:59 compute-0 systemd[1]: Started nova_compute container.
Jan 23 11:37:59 compute-0 sudo[185110]: pam_unix(sudo:session): session closed for user root
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Validating config file
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying service configuration files
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /etc/ceph
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Creating directory /etc/ceph
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Writing out command to execute
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:37:59 compute-0 nova_compute[185173]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 11:37:59 compute-0 nova_compute[185173]: ++ cat /run_command
Jan 23 11:37:59 compute-0 nova_compute[185173]: + CMD=nova-compute
Jan 23 11:37:59 compute-0 nova_compute[185173]: + ARGS=
Jan 23 11:37:59 compute-0 nova_compute[185173]: + sudo kolla_copy_cacerts
Jan 23 11:37:59 compute-0 nova_compute[185173]: + [[ ! -n '' ]]
Jan 23 11:37:59 compute-0 nova_compute[185173]: + . kolla_extend_start
Jan 23 11:37:59 compute-0 nova_compute[185173]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 11:37:59 compute-0 nova_compute[185173]: Running command: 'nova-compute'
Jan 23 11:37:59 compute-0 nova_compute[185173]: + umask 0022
Jan 23 11:37:59 compute-0 nova_compute[185173]: + exec nova-compute
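The start-up sequence above follows kolla's standard contract: kolla_set_configs reads /var/lib/kolla/config_files/config.json, and COPY_ALWAYS means every listed file is deleted at its destination and re-copied on each container start before permissions are set; the command to exec is written to /run_command and picked up by kolla_start. Below is a condensed sketch of that flow, assuming the usual config.json keys (command, plus config_files entries with source/dest/perm); it illustrates the mechanism and is not kolla's actual implementation (owner handling and glob sources are omitted):

    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    def set_configs():
        with open(CONFIG) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            src, dest = entry["source"], entry["dest"]
            # COPY_ALWAYS: delete whatever is there, then copy fresh
            # (the "Deleting ..." / "Copying ..." pairs in the log above).
            if os.path.lexists(dest):
                shutil.rmtree(dest) if os.path.isdir(dest) else os.remove(dest)
            (shutil.copytree if os.path.isdir(src) else shutil.copy2)(src, dest)
            if "perm" in entry:
                os.chmod(dest, int(entry["perm"], 8))  # e.g. "0600" -> 0o600
        # "Writing out command to execute": persist the service command.
        with open("/run_command", "w") as f:
            f.write(cfg["command"])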
Jan 23 11:37:59 compute-0 sudo[185334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsjuvlihbpovkwawyscqlywyqpaahnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168279.318131-1287-189124865381518/AnsiballZ_podman_container.py'
Jan 23 11:37:59 compute-0 sudo[185334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:37:59 compute-0 python3.9[185336]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 11:38:00 compute-0 systemd[1]: Started libpod-conmon-9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59.scope.
Jan 23 11:38:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae20dd661ccadcb22d9a13970fb7ab5f82d56ace19c18f5df42b512610817557/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 23 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae20dd661ccadcb22d9a13970fb7ab5f82d56ace19c18f5df42b512610817557/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 23 11:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae20dd661ccadcb22d9a13970fb7ab5f82d56ace19c18f5df42b512610817557/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 11:38:00 compute-0 podman[185361]: 2026-01-23 11:38:00.204966633 +0000 UTC m=+0.146751926 container init 9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 23 11:38:00 compute-0 podman[185361]: 2026-01-23 11:38:00.213358863 +0000 UTC m=+0.155144136 container start 9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 11:38:00 compute-0 python3.9[185336]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Applying nova statedir ownership
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 23 11:38:00 compute-0 nova_compute_init[185382]: INFO:nova_statedir:Nova statedir ownership complete
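The nova_compute_init pass above is a recursive ownership fix plus SELinux relabel over /var/lib/nova, honoring a skip list from the NOVA_STATEDIR_OWNERSHIP_SKIP environment variable (set to /var/lib/nova/compute_id in this container's config_data). A condensed sketch of the same idea; treating the skip list as colon-separated and using chcon for the relabel are assumptions made for brevity:

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436   # the nova uid/gid, per the log above
    STATEDIR = "/var/lib/nova"
    SKIP = set(filter(None,
                      os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))

    def fix_ownership():
        for dirpath, _dirnames, filenames in os.walk(STATEDIR):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # "Changing ownership of ... from 1000:1000 to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)
        # "Setting selinux context of ... to system_u:object_r:container_file_t:s0"
        subprocess.run(["chcon", "-R", "system_u:object_r:container_file_t:s0",
                        STATEDIR], check=True)

    if __name__ == "__main__":
        fix_ownership()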
Jan 23 11:38:00 compute-0 systemd[1]: libpod-9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59.scope: Deactivated successfully.
Jan 23 11:38:00 compute-0 podman[185383]: 2026-01-23 11:38:00.281235592 +0000 UTC m=+0.032602707 container died 9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59-userdata-shm.mount: Deactivated successfully.
Jan 23 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae20dd661ccadcb22d9a13970fb7ab5f82d56ace19c18f5df42b512610817557-merged.mount: Deactivated successfully.
Jan 23 11:38:00 compute-0 podman[185393]: 2026-01-23 11:38:00.335111522 +0000 UTC m=+0.058634130 container cleanup 9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:38:00 compute-0 systemd[1]: libpod-conmon-9e142cc10497365cd58916dfe8770456c98b43b1ac910456c6f222358992af59.scope: Deactivated successfully.
Jan 23 11:38:00 compute-0 sudo[185334]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:00 compute-0 sshd-session[162146]: Connection closed by 192.168.122.30 port 40282
Jan 23 11:38:00 compute-0 sshd-session[162143]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:38:00 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 11:38:00 compute-0 systemd[1]: session-24.scope: Consumed 1min 32.090s CPU time.
Jan 23 11:38:00 compute-0 systemd-logind[798]: Session 24 logged out. Waiting for processes to exit.
Jan 23 11:38:00 compute-0 systemd-logind[798]: Removed session 24.
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.073 185177 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.073 185177 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.074 185177 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.074 185177 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
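Loading those three plugins is a single call into os-vif, which discovers VIF plugin classes through Python entry points; a minimal sketch of the public API in use here:

    import os_vif

    # Discovers and initializes every installed VIF plugin; after this,
    # os_vif.plug()/os_vif.unplug() can dispatch to linux_bridge, noop, or ovs.
    os_vif.initialize()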
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.212 185177 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.224 185177 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.224 185177 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
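The failed grep above is a capability probe rather than an error: the iSCSI connector (this check comes from os-brick) greps the iscsiadm binary for the node.session.scan string, and exit status 1 simply means the pattern is absent, so manual session scanning is treated as unsupported and the command is not retried. The probe reduces to:

    import subprocess

    # grep exits 0 if the string occurs in the file, 1 if absent (as logged).
    res = subprocess.run(["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    manual_scan_supported = (res.returncode == 0)

Note that /usr/sbin/iscsiadm inside this container was replaced by the run-on-host shim during kolla_set_configs above, so the probe is grepping that wrapper script here.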
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.713 185177 INFO nova.virt.driver [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.809 185177 INFO nova.compute.provider_config [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.823 185177 DEBUG oslo_concurrency.lockutils [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.824 185177 DEBUG oslo_concurrency.lockutils [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.824 185177 DEBUG oslo_concurrency.lockutils [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.824 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.825 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.826 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.826 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.826 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.826 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.826 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.826 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.827 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.828 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.828 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.828 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.828 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.828 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.828 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.829 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.829 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.829 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.829 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.829 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.829 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.830 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.831 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.831 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.831 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.831 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.831 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.831 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.832 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.832 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.832 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.832 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.832 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.832 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.833 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.833 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.833 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.833 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.833 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.834 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.834 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.834 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.834 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.834 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.834 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.835 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.835 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.835 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.835 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.835 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.835 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.836 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.837 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.837 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.837 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.837 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.837 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.837 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.838 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.839 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.839 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.839 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.839 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.839 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.839 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.840 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.840 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.840 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.840 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.840 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.840 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.841 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.842 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.842 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.842 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.842 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.842 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.842 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.843 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.844 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.845 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.845 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.845 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.845 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.845 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.845 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.846 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.847 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.847 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.847 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.847 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.847 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.847 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.848 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.849 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.850 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.851 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.851 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.851 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.851 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.851 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.851 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.852 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.852 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.852 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.852 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.852 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.852 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.853 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.854 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.855 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.855 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.855 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.855 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.855 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.855 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.856 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.857 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.858 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.859 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.859 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.859 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.859 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.859 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.859 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.860 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.860 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.860 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.860 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.860 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.860 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.861 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.862 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.863 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.864 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.865 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.866 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.866 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.866 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.866 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.866 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.866 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.867 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.868 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.868 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.868 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.868 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.868 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.868 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.869 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.869 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.869 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.869 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.869 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.870 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.870 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.870 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.870 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.870 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.871 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.871 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.871 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.871 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.871 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.871 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.872 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.873 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.873 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.873 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.873 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.873 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.873 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.874 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.874 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.874 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.874 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.874 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.874 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.875 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.876 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.876 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.876 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.876 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.876 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.876 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.877 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.878 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.879 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.879 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.879 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.879 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.879 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.880 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.880 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.880 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.880 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.880 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.880 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.881 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.882 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.882 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.882 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.882 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.882 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.882 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.883 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.884 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.884 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.884 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.884 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.884 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.884 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.885 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.886 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.887 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.887 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.887 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.887 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.887 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.887 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.888 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.889 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.890 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.891 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.892 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.893 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.894 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.895 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.896 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.897 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.898 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.899 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.899 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.899 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.899 185177 WARNING oslo_config.cfg [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 11:38:01 compute-0 nova_compute[185173]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 11:38:01 compute-0 nova_compute[185173]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 11:38:01 compute-0 nova_compute[185173]: and ``live_migration_inbound_addr`` respectively.
Jan 23 11:38:01 compute-0 nova_compute[185173]: ).  Its value may be silently ignored in the future.
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.899 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.900 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.900 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.900 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.900 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.900 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.901 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.902 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.903 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.904 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.904 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.904 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.904 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.904 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.904 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.905 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.905 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.905 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.905 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.905 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.905 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.906 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.907 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.907 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.907 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.907 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.907 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.907 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.908 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.909 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.910 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.911 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.912 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.913 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.913 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.913 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.913 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.913 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.913 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.914 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.915 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.916 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.917 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.918 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.919 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.920 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.920 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.920 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.920 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.920 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.921 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.922 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.923 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.923 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.923 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.923 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.923 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.924 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.925 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.926 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.926 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.926 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.926 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.926 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.927 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.927 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.927 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.927 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.927 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.927 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.928 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.928 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.928 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.928 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.928 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.928 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.929 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.929 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.929 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.929 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.929 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.929 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.930 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.930 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.930 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.930 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.930 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.930 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.931 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.932 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.933 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.934 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.935 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.936 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.936 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.936 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.936 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.936 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.936 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.937 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.937 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.937 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.937 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.937 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.937 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.938 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.939 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.940 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.941 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.942 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.942 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.942 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.942 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.942 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.942 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.943 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.943 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.943 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.943 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.943 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.944 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.944 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.944 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.944 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.944 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.945 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.946 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.947 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.948 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.948 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.948 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.948 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.948 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.948 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.949 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.949 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.949 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.949 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.949 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.949 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.950 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.951 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.952 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.953 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.954 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.955 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.956 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.957 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.958 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.959 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.960 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.961 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.961 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.961 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.961 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.961 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.961 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.962 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.963 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.963 185177 DEBUG oslo_service.service [None req-02eb06e1-3ac8-4be3-abba-7e74ca79d85c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
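[Editor's note] The block above is oslo.config's startup dump of every registered option, with secrets masked as ****; the closing row of asterisks comes from the same cfg.py call. A minimal sketch of how such a dump is produced, assuming only oslo.config is installed — the two options re-registered here are illustrative stand-ins for the full set nova registers at import time:

    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Illustrative re-registration of two options seen in the dump above;
    # nova and its oslo libraries register the real set themselves.
    CONF.register_opts(
        [
            cfg.BoolOpt('rabbit_quorum_queue', default=True),
            cfg.IntOpt('rabbit_retry_interval', default=1),
        ],
        group='oslo_messaging_rabbit',
    )

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([], project='nova')
    # Emits one "group.option = value log_opt_values ..." DEBUG line per
    # registered option, then the row of asterisks that closes the dump.
    CONF.log_opt_values(LOG, logging.DEBUG)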
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.964 185177 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.980 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.980 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.981 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.981 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.992 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fce58fecd60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.994 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fce58fecd60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 23 11:38:01 compute-0 nova_compute[185173]: 2026-01-23 11:38:01.994 185177 INFO nova.virt.libvirt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Connection event '1' reason 'None'
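[Editor's note] The lines at 11:38:01.980-11:38:01.994 show nova's Host object starting its native event thread, connecting to qemu:///system, and registering for lifecycle and connection events. A minimal standalone sketch of the same libvirt event registration, assuming the python3-libvirt bindings and access to qemu:///system (the callback name is illustrative):

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Fires on domain start/stop/suspend and similar transitions — the
        # events nova's dispatch thread turns into instance state updates.
        print(dom.name(), event, detail)

    # Default event loop implementation; nova runs an equivalent loop in
    # the "native event thread" logged above.
    libvirt.virEventRegisterDefaultImpl()

    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)

    while True:
        libvirt.virEventRunDefaultImpl()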
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.001 185177 INFO nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]: 
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <host>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <uuid>850fefef-4162-4be1-a464-e0586d5e52c6</uuid>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <arch>x86_64</arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model>EPYC-Rome-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <vendor>AMD</vendor>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <microcode version='16777317'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <signature family='23' model='49' stepping='0'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='x2apic'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='tsc-deadline'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='osxsave'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='hypervisor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='tsc_adjust'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='spec-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='stibp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='arch-capabilities'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='cmp_legacy'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='topoext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='virt-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='lbrv'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='tsc-scale'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='vmcb-clean'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='pause-filter'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='pfthreshold'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='svme-addr-chk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='rdctl-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='skip-l1dfl-vmentry'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='mds-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature name='pschange-mc-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <pages unit='KiB' size='4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <pages unit='KiB' size='2048'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <pages unit='KiB' size='1048576'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <power_management>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <suspend_mem/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <suspend_disk/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <suspend_hybrid/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </power_management>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <iommu support='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <migration_features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <live/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <uri_transports>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <uri_transport>tcp</uri_transport>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <uri_transport>rdma</uri_transport>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </uri_transports>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </migration_features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <topology>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <cells num='1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <cell id='0'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           <memory unit='KiB'>7864316</memory>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           <pages unit='KiB' size='2048'>0</pages>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           <distances>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <sibling id='0' value='10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           </distances>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           <cpus num='8'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:           </cpus>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         </cell>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </cells>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </topology>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <cache>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </cache>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <secmodel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model>selinux</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <doi>0</doi>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </secmodel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <secmodel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model>dac</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <doi>0</doi>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </secmodel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </host>
Jan 23 11:38:02 compute-0 nova_compute[185173]: 
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <guest>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <os_type>hvm</os_type>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <arch name='i686'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <wordsize>32</wordsize>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <domain type='qemu'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <domain type='kvm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <pae/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <nonpae/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <acpi default='on' toggle='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <apic default='on' toggle='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <cpuselection/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <deviceboot/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <disksnapshot default='on' toggle='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <externalSnapshot/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </guest>
Jan 23 11:38:02 compute-0 nova_compute[185173]: 
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <guest>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <os_type>hvm</os_type>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <arch name='x86_64'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <wordsize>64</wordsize>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <domain type='qemu'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <domain type='kvm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <acpi default='on' toggle='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <apic default='on' toggle='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <cpuselection/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <deviceboot/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <disksnapshot default='on' toggle='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <externalSnapshot/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </guest>
Jan 23 11:38:02 compute-0 nova_compute[185173]: 
Jan 23 11:38:02 compute-0 nova_compute[185173]: </capabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]: 
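[Editor's note] The <capabilities> document above is the XML that libvirt's getCapabilities() call returns; nova logs it once at startup and then parses out the CPU, NUMA, and guest-support details. A minimal sketch of fetching and reading the same fields, assuming python3-libvirt and the qemu:///system URI from the log:

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())
    conn.close()

    # Host CPU model and vendor, as in the <host><cpu> element above.
    print(caps.findtext('./host/cpu/model'))    # EPYC-Rome-v4
    print(caps.findtext('./host/cpu/vendor'))   # AMD

    # Supported machine types per guest architecture, as in the <guest> blocks.
    for guest in caps.findall('./guest'):
        arch = guest.find('arch').get('name')
        machines = [m.text for m in guest.findall('arch/machine')]
        print(arch, machines[:3], '...')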
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.007 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.009 185177 WARNING nova.virt.libvirt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
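[Editor's note] The ComputeHostNotFound warning above is the expected first-start case: no service record exists yet for compute-0.ctlplane.example.com, so the status update is skipped until the host registers itself. A hedged sketch for verifying registration afterwards with openstacksdk, assuming a configured clouds.yaml (the cloud name is a placeholder):

    import openstack

    # 'overcloud' is an illustrative cloud name from clouds.yaml.
    conn = openstack.connect(cloud='overcloud')

    # Lists nova services; once the compute node has registered, an entry
    # for its host appears with the service status and state.
    for svc in conn.compute.services():
        print(svc.binary, svc.host, svc.status, svc.state)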
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.009 185177 DEBUG nova.virt.libvirt.volume.mount [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.013 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 11:38:02 compute-0 nova_compute[185173]: <domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <domain>kvm</domain>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <arch>i686</arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <vcpu max='240'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <iothreads supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <os supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='firmware'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <loader supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>rom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pflash</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='readonly'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>yes</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='secure'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </loader>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </os>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='maximum' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='maximumMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-model' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <vendor>AMD</vendor>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='x2apic'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='stibp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='succor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lbrv'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='custom' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Dhyana-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <memoryBacking supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='sourceType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>anonymous</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>memfd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </memoryBacking>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <disk supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='diskDevice'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>disk</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cdrom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>floppy</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>lun</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ide</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>fdc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>sata</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <graphics supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vnc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egl-headless</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </graphics>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <video supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='modelType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vga</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cirrus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>none</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>bochs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ramfb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </video>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hostdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='mode'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>subsystem</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='startupPolicy'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>mandatory</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>requisite</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>optional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='subsysType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pci</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='capsType'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='pciBackend'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hostdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <rng supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>random</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <filesystem supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='driverType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>path</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>handle</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtiofs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </filesystem>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tpm supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-tis</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-crb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emulator</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>external</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendVersion'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>2.0</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </tpm>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <redirdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </redirdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <channel supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </channel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <crypto supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </crypto>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <interface supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>passt</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <panic supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>isa</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>hyperv</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </panic>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <console supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>null</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dev</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pipe</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stdio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>udp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tcp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu-vdagent</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </console>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <gic supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <vmcoreinfo supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <genid supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backingStoreInput supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backup supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <async-teardown supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <s390-pv supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <ps2 supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tdx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sev supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sgx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hyperv supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='features'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>relaxed</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vapic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>spinlocks</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vpindex</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>runtime</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>synic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stimer</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reset</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vendor_id</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>frequencies</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reenlightenment</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tlbflush</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ipi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>avic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emsr_bitmap</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>xmm_input</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <spinlocks>4095</spinlocks>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <stimer_direct>on</stimer_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hyperv>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <launchSecurity supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </features>
Jan 23 11:38:02 compute-0 nova_compute[185173]: </domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.024 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 11:38:02 compute-0 nova_compute[185173]: <domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <domain>kvm</domain>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <arch>i686</arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <vcpu max='4096'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <iothreads supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <os supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='firmware'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <loader supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>rom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pflash</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='readonly'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>yes</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='secure'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </loader>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </os>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='maximum' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='maximumMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-model' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <vendor>AMD</vendor>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='x2apic'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='stibp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='succor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lbrv'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='custom' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Dhyana-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <memoryBacking supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='sourceType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>anonymous</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>memfd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </memoryBacking>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <disk supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='diskDevice'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>disk</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cdrom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>floppy</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>lun</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>fdc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>sata</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <graphics supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vnc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egl-headless</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </graphics>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <video supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='modelType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vga</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cirrus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>none</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>bochs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ramfb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </video>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hostdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='mode'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>subsystem</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='startupPolicy'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>mandatory</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>requisite</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>optional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='subsysType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pci</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='capsType'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='pciBackend'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hostdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <rng supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>random</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <filesystem supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='driverType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>path</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>handle</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtiofs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </filesystem>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tpm supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-tis</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-crb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emulator</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>external</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendVersion'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>2.0</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </tpm>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <redirdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </redirdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <channel supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </channel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <crypto supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </crypto>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <interface supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>passt</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <panic supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>isa</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>hyperv</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </panic>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <console supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>null</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dev</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pipe</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stdio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>udp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tcp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu-vdagent</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </console>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <gic supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <vmcoreinfo supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <genid supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backingStoreInput supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backup supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <async-teardown supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <s390-pv supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <ps2 supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tdx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sev supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sgx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hyperv supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='features'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>relaxed</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vapic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>spinlocks</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vpindex</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>runtime</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>synic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stimer</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reset</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vendor_id</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>frequencies</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reenlightenment</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tlbflush</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ipi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>avic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emsr_bitmap</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>xmm_input</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <spinlocks>4095</spinlocks>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <stimer_direct>on</stimer_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hyperv>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <launchSecurity supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </features>
Jan 23 11:38:02 compute-0 nova_compute[185173]: </domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.073 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.076 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 11:38:02 compute-0 nova_compute[185173]: <domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <domain>kvm</domain>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <arch>x86_64</arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <vcpu max='240'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <iothreads supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <os supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='firmware'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <loader supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>rom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pflash</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='readonly'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>yes</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='secure'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </loader>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </os>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='maximum' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='maximumMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-model' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <vendor>AMD</vendor>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='x2apic'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='stibp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='succor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lbrv'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='custom' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Dhyana-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <memoryBacking supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='sourceType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>anonymous</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>memfd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </memoryBacking>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <disk supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='diskDevice'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>disk</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cdrom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>floppy</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>lun</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ide</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>fdc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>sata</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <graphics supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vnc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egl-headless</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </graphics>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <video supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='modelType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vga</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cirrus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>none</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>bochs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ramfb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </video>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hostdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='mode'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>subsystem</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='startupPolicy'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>mandatory</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>requisite</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>optional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='subsysType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pci</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='capsType'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='pciBackend'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hostdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <rng supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>random</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <filesystem supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='driverType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>path</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>handle</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtiofs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </filesystem>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tpm supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-tis</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-crb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emulator</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>external</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendVersion'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>2.0</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </tpm>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <redirdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </redirdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <channel supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </channel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <crypto supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </crypto>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <interface supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>passt</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <panic supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>isa</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>hyperv</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </panic>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <console supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>null</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dev</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pipe</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stdio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>udp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tcp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu-vdagent</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </console>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <gic supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <vmcoreinfo supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <genid supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backingStoreInput supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backup supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <async-teardown supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <s390-pv supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <ps2 supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tdx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sev supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sgx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hyperv supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='features'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>relaxed</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vapic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>spinlocks</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vpindex</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>runtime</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>synic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stimer</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reset</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vendor_id</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>frequencies</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reenlightenment</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tlbflush</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ipi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>avic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emsr_bitmap</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>xmm_input</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <spinlocks>4095</spinlocks>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <stimer_direct>on</stimer_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hyperv>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <launchSecurity supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </features>
Jan 23 11:38:02 compute-0 nova_compute[185173]: </domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.157 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 23 11:38:02 compute-0 nova_compute[185173]: <domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <domain>kvm</domain>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <arch>x86_64</arch>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <vcpu max='4096'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <iothreads supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <os supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='firmware'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>efi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <loader supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>rom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pflash</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='readonly'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>yes</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='secure'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>yes</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>no</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </loader>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </os>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-passthrough' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='hostPassthroughMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='maximum' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='maximumMigratable'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>on</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>off</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='host-model' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <vendor>AMD</vendor>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='x2apic'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='hypervisor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='stibp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='overflow-recov'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='succor'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lbrv'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='tsc-scale'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='flushbyasid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pause-filter'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='pfthreshold'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <feature policy='disable' name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <mode name='custom' supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Broadwell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='ClearwaterForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ddpd-u'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sha512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm3'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sm4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Cooperlake-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Denverton-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Dhyana-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Milan-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Rome-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-Turin-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amd-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='auto-ibrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vp2intersect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fs-gs-base-ns'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibpb-brtype'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='no-nested-data-bp'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='null-sel-clr-base'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='perfmon-v2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbpb'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='srso-user-kernel-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='stibp-always-on'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='EPYC-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='GraniteRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-128'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-256'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx10-512'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='prefetchiti'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Haswell-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v6'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Icelake-Server-v7'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='IvyBridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='KnightsMill-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4fmaps'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-4vnniw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512er'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512pf'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G4-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Opteron_G5-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fma4'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tbm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xop'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SapphireRapids-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='amx-tile'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-bf16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-fp16'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512-vpopcntdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bitalg'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vbmi2'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrc'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fzrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='la57'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='taa-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='tsx-ldtrk'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='SierraForest-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ifma'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-ne-convert'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx-vnni-int8'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bhi-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='bus-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cmpccxadd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fbsdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='fsrs'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ibrs-all'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='intel-psfd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ipred-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='lam'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mcdt-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pbrsb-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='psdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rrsba-ctrl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='sbdr-ssdp-no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='serialize'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vaes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='vpclmulqdq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Client-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='hle'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='rtm'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Skylake-Server-v5'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512bw'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512cd'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512dq'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512f'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='avx512vl'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='invpcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pcid'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='pku'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='mpx'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v2'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v3'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='core-capability'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='split-lock-detect'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='Snowridge-v4'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='cldemote'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='erms'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='gfni'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdir64b'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='movdiri'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='xsaves'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='athlon-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='core2duo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='coreduo-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='n270-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='ss'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <blockers model='phenom-v1'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnow'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <feature name='3dnowext'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </blockers>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </mode>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <memoryBacking supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <enum name='sourceType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>anonymous</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <value>memfd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </memoryBacking>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <disk supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='diskDevice'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>disk</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cdrom</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>floppy</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>lun</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>fdc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>sata</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <graphics supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vnc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egl-headless</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </graphics>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <video supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='modelType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vga</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>cirrus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>none</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>bochs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ramfb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </video>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hostdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='mode'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>subsystem</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='startupPolicy'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>mandatory</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>requisite</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>optional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='subsysType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pci</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>scsi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='capsType'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='pciBackend'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hostdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <rng supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtio-non-transitional</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>random</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>egd</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <filesystem supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='driverType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>path</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>handle</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>virtiofs</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </filesystem>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tpm supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-tis</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tpm-crb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emulator</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>external</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendVersion'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>2.0</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </tpm>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <redirdev supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='bus'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>usb</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </redirdev>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <channel supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </channel>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <crypto supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendModel'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>builtin</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </crypto>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <interface supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='backendType'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>default</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>passt</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <panic supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='model'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>isa</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>hyperv</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </panic>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <console supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='type'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>null</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vc</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pty</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dev</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>file</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>pipe</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stdio</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>udp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tcp</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>unix</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>qemu-vdagent</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>dbus</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </console>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   <features>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <gic supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <vmcoreinfo supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <genid supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backingStoreInput supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <backup supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <async-teardown supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <s390-pv supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <ps2 supported='yes'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <tdx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sev supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <sgx supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <hyperv supported='yes'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <enum name='features'>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>relaxed</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vapic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>spinlocks</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vpindex</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>runtime</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>synic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>stimer</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reset</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>vendor_id</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>frequencies</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>reenlightenment</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>tlbflush</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>ipi</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>avic</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>emsr_bitmap</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <value>xmm_input</value>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </enum>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       <defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <spinlocks>4095</spinlocks>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <stimer_direct>on</stimer_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 11:38:02 compute-0 nova_compute[185173]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 11:38:02 compute-0 nova_compute[185173]:       </defaults>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     </hyperv>
Jan 23 11:38:02 compute-0 nova_compute[185173]:     <launchSecurity supported='no'/>
Jan 23 11:38:02 compute-0 nova_compute[185173]:   </features>
Jan 23 11:38:02 compute-0 nova_compute[185173]: </domainCapabilities>
Jan 23 11:38:02 compute-0 nova_compute[185173]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.232 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.238 185177 INFO nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Secure Boot support detected
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.240 185177 INFO nova.virt.libvirt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.247 185177 DEBUG nova.virt.libvirt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.264 185177 INFO nova.virt.node [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Determined node identity 77dd020c-2f5c-40b0-b660-8a95a28aabbd from /var/lib/nova/compute_id
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.281 185177 WARNING nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Compute nodes ['77dd020c-2f5c-40b0-b660-8a95a28aabbd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.303 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.346 185177 WARNING nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.347 185177 DEBUG oslo_concurrency.lockutils [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.347 185177 DEBUG oslo_concurrency.lockutils [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.348 185177 DEBUG oslo_concurrency.lockutils [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.348 185177 DEBUG nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:38:02 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 11:38:02 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.607 185177 WARNING nova.virt.libvirt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.608 185177 DEBUG nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6019MB free_disk=72.64818572998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
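The resource-view record above embeds the host's PCI inventory as a JSON array. Vendor ID 8086 is Intel and 1af4 is the Red Hat/virtio PCI vendor ID, so this (itself KVM-hosted) compute node sees only emulated chipset functions and virtio devices, every one reporting "numa_node": null. A small sketch for tallying that list, where pci_json is assumed to hold the [...] string from the log line:

    import json
    from collections import Counter

    def tally_pci_vendors(pci_json: str) -> Counter:
        """Count pci_devices entries by vendor_id (8086 = Intel, 1af4 = virtio)."""
        devices = json.loads(pci_json)
        # In this capture all 11 devices report numa_node == null.
        assert all(d['numa_node'] is None for d in devices)
        # Expected for the line above: Counter({'1af4': 6, '8086': 5}).
        return Counter(d['vendor_id'] for d in devices)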
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.608 185177 DEBUG oslo_concurrency.lockutils [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.608 185177 DEBUG oslo_concurrency.lockutils [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.634 185177 WARNING nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] No compute node record for compute-0.ctlplane.example.com:77dd020c-2f5c-40b0-b660-8a95a28aabbd: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 77dd020c-2f5c-40b0-b660-8a95a28aabbd could not be found.
Jan 23 11:38:02 compute-0 rsyslogd[1006]: imjournal from <np0005593388:nova_compute>: begin to drop messages due to rate-limiting
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.662 185177 INFO nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 77dd020c-2f5c-40b0-b660-8a95a28aabbd
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.728 185177 DEBUG nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:38:02 compute-0 nova_compute[185173]: 2026-01-23 11:38:02.728 185177 DEBUG nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:38:03 compute-0 nova_compute[185173]: 2026-01-23 11:38:03.700 185177 INFO nova.scheduler.client.report [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [req-3c40b9ce-d024-4567-91ab-416c428d9a66] Created resource provider record via placement API for resource provider with UUID 77dd020c-2f5c-40b0-b660-8a95a28aabbd and name compute-0.ctlplane.example.com.
Jan 23 11:38:03 compute-0 podman[185491]: 2026-01-23 11:38:03.744189927 +0000 UTC m=+0.078730102 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.092 185177 DEBUG nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 23 11:38:04 compute-0 nova_compute[185173]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.092 185177 INFO nova.virt.libvirt.host [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] kernel doesn't support AMD SEV
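The two records above show how the SEV probe works: nova reads the kvm_amd module parameter from sysfs, and since the file contains "N" on this Intel/KVM host it reports that the kernel lacks AMD SEV support. A sketch of the same kind of check, under the assumption that "1"/"Y" mean enabled and "0"/"N" (or an unreadable file) mean disabled:

    from pathlib import Path

    SEV_PARAM = "/sys/module/kvm_amd/parameters/sev"

    def kernel_supports_amd_sev(param: str = SEV_PARAM) -> bool:
        """True if the kvm_amd 'sev' module parameter reads as enabled."""
        try:
            return Path(param).read_text().strip() in ("1", "Y", "y")
        except OSError:  # kvm_amd not loaded at all
            return False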
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.092 185177 DEBUG nova.compute.provider_tree [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.093 185177 DEBUG nova.virt.libvirt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.137 185177 DEBUG nova.scheduler.client.report [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Updated inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
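The inventory pushed to placement above yields schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked through with the logged values: memory (7679 - 512) * 1.0 = 7167 MB, vCPUs (8 - 0) * 4.0 = 32, disk (79 - 0) * 0.9 = 71.1 GB, so CPU is overcommitted 4x while disk keeps roughly 10% headroom. The same arithmetic as a runnable snippet:

    # Capacity per resource class, from the inventory dict in the log above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity}")   # MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: 71.1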
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.138 185177 DEBUG nova.compute.provider_tree [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Updating resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.138 185177 DEBUG nova.compute.provider_tree [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.274 185177 DEBUG nova.compute.provider_tree [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Updating resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.299 185177 DEBUG nova.compute.resource_tracker [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.300 185177 DEBUG oslo_concurrency.lockutils [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.300 185177 DEBUG nova.service [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.392 185177 DEBUG nova.service [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 23 11:38:04 compute-0 nova_compute[185173]: 2026-01-23 11:38:04.392 185177 DEBUG nova.servicegroup.drivers.db [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 23 11:38:05 compute-0 podman[185510]: 2026-01-23 11:38:05.745181953 +0000 UTC m=+0.078746102 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:38:06 compute-0 sshd-session[185536]: Accepted publickey for zuul from 192.168.122.30 port 32792 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:38:06 compute-0 systemd-logind[798]: New session 26 of user zuul.
Jan 23 11:38:07 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 23 11:38:07 compute-0 sshd-session[185536]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:38:08 compute-0 python3.9[185689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:38:08 compute-0 nova_compute[185173]: 2026-01-23 11:38:08.394 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:38:08 compute-0 nova_compute[185173]: 2026-01-23 11:38:08.419 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:38:09 compute-0 sudo[185843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdjcjvvmjpsopbopvisxtvvnmiewwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168288.8040006-31-154051758237187/AnsiballZ_systemd_service.py'
Jan 23 11:38:09 compute-0 sudo[185843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:09 compute-0 python3.9[185845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:38:09 compute-0 systemd[1]: Reloading.
Jan 23 11:38:09 compute-0 systemd-rc-local-generator[185873]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:38:09 compute-0 systemd-sysv-generator[185877]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:38:10 compute-0 sudo[185843]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:10 compute-0 python3.9[186030]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:38:10 compute-0 network[186047]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:38:10 compute-0 network[186048]: 'network-scripts' will be removed from distribution in near future.
Jan 23 11:38:10 compute-0 network[186049]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:38:15 compute-0 sudo[186319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daiqvrwmrzqzcxgbtrwvbbvqjkxbcnhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168295.0214224-50-182488495816914/AnsiballZ_systemd_service.py'
Jan 23 11:38:15 compute-0 sudo[186319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:15 compute-0 python3.9[186321]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:38:15 compute-0 sudo[186319]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:16 compute-0 sudo[186472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bktwaxauezwayepbszrzhkjdshpcwofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168295.9581177-60-183752711353263/AnsiballZ_file.py'
Jan 23 11:38:16 compute-0 sudo[186472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:16 compute-0 python3.9[186474]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:16 compute-0 sudo[186472]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:16 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:38:17 compute-0 sudo[186625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzobtsajvkcxltujqiuugypafsthpmid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168296.814639-68-6783495351295/AnsiballZ_file.py'
Jan 23 11:38:17 compute-0 sudo[186625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:17 compute-0 python3.9[186627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:17 compute-0 sudo[186625]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:17 compute-0 sudo[186777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-resxnfzhjpbxwsupyzhruguvjvcohhef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168297.5758088-77-244990703749351/AnsiballZ_command.py'
Jan 23 11:38:17 compute-0 sudo[186777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:18 compute-0 python3.9[186779]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:38:18 compute-0 sudo[186777]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:18 compute-0 python3.9[186931]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:38:19 compute-0 sudo[187081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvbrhbgxhtcgjkiszjhuptbtbvrdodbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168299.2158568-95-45051966598555/AnsiballZ_systemd_service.py'
Jan 23 11:38:19 compute-0 sudo[187081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:19 compute-0 python3.9[187083]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:38:19 compute-0 systemd[1]: Reloading.
Jan 23 11:38:19 compute-0 systemd-rc-local-generator[187109]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:38:19 compute-0 systemd-sysv-generator[187112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:38:20 compute-0 sudo[187081]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:20 compute-0 sudo[187268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvcunhslxmptifvlfuitdgaiwlobnojj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168300.2787983-103-250863957819562/AnsiballZ_command.py'
Jan 23 11:38:20 compute-0 sudo[187268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:20 compute-0 python3.9[187270]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:38:20 compute-0 sudo[187268]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:21 compute-0 sudo[187421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrlzmvmkgswelnbcxqpestwmmwthlbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168301.0943794-112-164266197119343/AnsiballZ_file.py'
Jan 23 11:38:21 compute-0 sudo[187421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:21 compute-0 python3.9[187423]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:21 compute-0 sudo[187421]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:22 compute-0 python3.9[187573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:22 compute-0 sudo[187725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyxccihsobwrjvvbeekibmjasuuiawq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168302.5379376-128-137370924394207/AnsiballZ_group.py'
Jan 23 11:38:22 compute-0 sudo[187725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:23 compute-0 python3.9[187727]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 23 11:38:23 compute-0 sudo[187725]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:23 compute-0 sudo[187877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlnwgfcfxkopjhmymhnazvwuevykkqvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168303.421677-139-151539672257683/AnsiballZ_getent.py'
Jan 23 11:38:23 compute-0 sudo[187877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:24 compute-0 python3.9[187879]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 23 11:38:24 compute-0 sudo[187877]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:24 compute-0 sudo[188030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfazqbaoavoszazhqdbxfnqacuupxebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168304.2438524-147-98352414674936/AnsiballZ_group.py'
Jan 23 11:38:24 compute-0 sudo[188030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:24 compute-0 python3.9[188032]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 11:38:24 compute-0 groupadd[188033]: group added to /etc/group: name=ceilometer, GID=42405
Jan 23 11:38:24 compute-0 groupadd[188033]: group added to /etc/gshadow: name=ceilometer
Jan 23 11:38:24 compute-0 groupadd[188033]: new group: name=ceilometer, GID=42405
Jan 23 11:38:24 compute-0 sudo[188030]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:25 compute-0 sudo[188188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplsmywandyzmeejejtzjrajdlvbgpzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168305.0284836-155-49173647477663/AnsiballZ_user.py'
Jan 23 11:38:25 compute-0 sudo[188188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:25 compute-0 python3.9[188190]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 11:38:25 compute-0 useradd[188192]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 11:38:25 compute-0 useradd[188192]: add 'ceilometer' to group 'libvirt'
Jan 23 11:38:25 compute-0 useradd[188192]: add 'ceilometer' to shadow group 'libvirt'
Jan 23 11:38:25 compute-0 sudo[188188]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:27 compute-0 python3.9[188348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:27 compute-0 python3.9[188469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168306.670129-181-46146298930037/.source.conf _original_basename=ceilometer.conf follow=False checksum=806b21daa538a66a80669be8bf74c414d178dfbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:28 compute-0 python3.9[188619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:28 compute-0 python3.9[188740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168307.9105275-181-139646348899458/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:38:29.077 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:38:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:38:29.078 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:38:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:38:29.078 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:38:29 compute-0 python3.9[188890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:29 compute-0 python3.9[189011]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168308.9122355-181-89076104331570/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:30 compute-0 python3.9[189161]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:30 compute-0 python3.9[189313]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:31 compute-0 python3.9[189465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:32 compute-0 python3.9[189586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168311.1673698-240-82416199228168/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:32 compute-0 python3.9[189736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:33 compute-0 python3.9[189857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168312.318573-240-113349605213875/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:33 compute-0 python3.9[190007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:34 compute-0 podman[190102]: 2026-01-23 11:38:34.145051296 +0000 UTC m=+0.056483475 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 11:38:34 compute-0 python3.9[190141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168313.4106305-269-114636349134863/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:35 compute-0 python3.9[190297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:35 compute-0 python3.9[190418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168314.6903944-285-218542783189365/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:36 compute-0 podman[190542]: 2026-01-23 11:38:36.310419929 +0000 UTC m=+0.142988652 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 11:38:36 compute-0 python3.9[190585]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:36 compute-0 python3.9[190716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168315.8654523-300-113370671120015/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:37 compute-0 python3.9[190866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:38 compute-0 python3.9[190987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168317.163439-315-280886313029600/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:38 compute-0 sudo[191137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiotcuhfxmgxdywgaavsqhqouhpdottz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168318.4303145-330-258221371030892/AnsiballZ_file.py'
Jan 23 11:38:38 compute-0 sudo[191137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:39 compute-0 python3.9[191139]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:39 compute-0 sudo[191137]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:39 compute-0 sudo[191289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjeneabdgsjtikmtodgnmzpydodhpzyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168319.2715313-338-155510727187447/AnsiballZ_file.py'
Jan 23 11:38:39 compute-0 sudo[191289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:39 compute-0 python3.9[191291]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:39 compute-0 sudo[191289]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:40 compute-0 python3.9[191441]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:41 compute-0 python3.9[191593]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:41 compute-0 python3.9[191745]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:42 compute-0 sudo[191897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwgbfkesdqzyvwupssplwzgdjromylsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168322.1091344-370-68339891177937/AnsiballZ_file.py'
Jan 23 11:38:42 compute-0 sudo[191897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:42 compute-0 python3.9[191899]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:42 compute-0 sudo[191897]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:43 compute-0 sudo[192049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytzbgnldtfrkiudcokhruokixofieqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168322.8009932-378-253993482409190/AnsiballZ_systemd_service.py'
Jan 23 11:38:43 compute-0 sudo[192049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:43 compute-0 python3.9[192051]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:38:43 compute-0 systemd[1]: Reloading.
Jan 23 11:38:43 compute-0 systemd-rc-local-generator[192079]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:38:43 compute-0 systemd-sysv-generator[192084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:38:44 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 23 11:38:44 compute-0 sudo[192049]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:44 compute-0 sudo[192240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-janwqryncedamqbmbvormwwfkqdiiuzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168324.3926392-387-108743665714125/AnsiballZ_stat.py'
Jan 23 11:38:44 compute-0 sudo[192240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:44 compute-0 python3.9[192242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:44 compute-0 sudo[192240]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:45 compute-0 rsyslogd[1006]: imjournal: 1856 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 23 11:38:45 compute-0 sudo[192363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxwwcdmnilxdkfbfkwgmssmauqzvtay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168324.3926392-387-108743665714125/AnsiballZ_copy.py'
Jan 23 11:38:45 compute-0 sudo[192363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:45 compute-0 python3.9[192365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168324.3926392-387-108743665714125/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:45 compute-0 sudo[192363]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:45 compute-0 sudo[192439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzyfxlutrxjynoiydqfqqkixdscogrro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168324.3926392-387-108743665714125/AnsiballZ_stat.py'
Jan 23 11:38:45 compute-0 sudo[192439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:45 compute-0 python3.9[192441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:45 compute-0 sudo[192439]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:46 compute-0 sudo[192562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-homvtuvocqrwtandxmawtnzewrzjrkiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168324.3926392-387-108743665714125/AnsiballZ_copy.py'
Jan 23 11:38:46 compute-0 sudo[192562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:46 compute-0 python3.9[192564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168324.3926392-387-108743665714125/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:46 compute-0 sudo[192562]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:47 compute-0 sudo[192714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lriytffrfwncqhnqcnolbwgdesahqkrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168327.0472238-419-119424663726210/AnsiballZ_file.py'
Jan 23 11:38:47 compute-0 sudo[192714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:47 compute-0 python3.9[192716]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:47 compute-0 sudo[192714]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:48 compute-0 sudo[192866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htsjcjkfpxwqkeblktpgqhcjleupjwyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168327.6926405-427-124560659608125/AnsiballZ_file.py'
Jan 23 11:38:48 compute-0 sudo[192866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:48 compute-0 python3.9[192868]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:38:48 compute-0 sudo[192866]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:48 compute-0 sudo[193018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnvncyddldwnnjomzhmnnkixbhdszvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168328.5476382-435-167110809718075/AnsiballZ_stat.py'
Jan 23 11:38:48 compute-0 sudo[193018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:48 compute-0 python3.9[193020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:38:48 compute-0 sudo[193018]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:49 compute-0 sudo[193141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rayveqzjqtmofhmbmbavzuofsffpvkni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168328.5476382-435-167110809718075/AnsiballZ_copy.py'
Jan 23 11:38:49 compute-0 sudo[193141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:49 compute-0 python3.9[193143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168328.5476382-435-167110809718075/.source.json _original_basename=.wazv922f follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:49 compute-0 sudo[193141]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:50 compute-0 python3.9[193293]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:52 compute-0 sudo[193714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghgpsopfqlzbxcydxzaglhlxtbhknplx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168331.7833233-475-218543912784788/AnsiballZ_container_config_data.py'
Jan 23 11:38:52 compute-0 sudo[193714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:52 compute-0 python3.9[193716]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 23 11:38:52 compute-0 sudo[193714]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:53 compute-0 sudo[193866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjikzgvikiqqjtgsgemcpxmbfnuixacl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168332.8574333-486-108868413399529/AnsiballZ_container_config_hash.py'
Jan 23 11:38:53 compute-0 sudo[193866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:53 compute-0 python3.9[193868]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:38:53 compute-0 sudo[193866]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:54 compute-0 sudo[194018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjxxmaqgyjrgnlhfadyxtruewgpxmsb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168333.9339728-496-216015751455650/AnsiballZ_edpm_container_manage.py'
Jan 23 11:38:54 compute-0 sudo[194018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:54 compute-0 python3[194020]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:38:54 compute-0 podman[194056]: 2026-01-23 11:38:54.948122978 +0000 UTC m=+0.056782113 container create 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4)
Jan 23 11:38:54 compute-0 podman[194056]: 2026-01-23 11:38:54.915506841 +0000 UTC m=+0.024165976 image pull 673eb625b19e37533ec15e219000c7d8233802c3ffa5adfdd7e8765ce31baf5c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 23 11:38:54 compute-0 python3[194020]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 23 11:38:55 compute-0 sudo[194018]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:55 compute-0 sudo[194243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xopxquqiddmuesidxzjvveqdvomgtmhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168335.2796457-504-60402410282168/AnsiballZ_stat.py'
Jan 23 11:38:55 compute-0 sudo[194243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:55 compute-0 python3.9[194245]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:55 compute-0 sudo[194243]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:56 compute-0 sudo[194397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtnlunkecincjxfcywuxglimktfeupat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168336.1415534-513-8129491686487/AnsiballZ_file.py'
Jan 23 11:38:56 compute-0 sudo[194397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:56 compute-0 python3.9[194399]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:56 compute-0 sudo[194397]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:56 compute-0 sudo[194473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcqtvlzhfzhwpwznyzhhcpiadtpwiehw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168336.1415534-513-8129491686487/AnsiballZ_stat.py'
Jan 23 11:38:56 compute-0 sudo[194473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:57 compute-0 python3.9[194475]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:38:57 compute-0 sudo[194473]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:57 compute-0 sudo[194624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogldwrhngemfjgxmyuluqheljzvsddzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168337.1436853-513-265131061889982/AnsiballZ_copy.py'
Jan 23 11:38:57 compute-0 sudo[194624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:57 compute-0 python3.9[194626]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168337.1436853-513-265131061889982/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:38:57 compute-0 sudo[194624]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:58 compute-0 sudo[194700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlhbvzaeqbylyojueurqhfqhlwpsgzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168337.1436853-513-265131061889982/AnsiballZ_systemd.py'
Jan 23 11:38:58 compute-0 sudo[194700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:58 compute-0 python3.9[194702]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:38:58 compute-0 systemd[1]: Reloading.
Jan 23 11:38:59 compute-0 systemd-sysv-generator[194732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:38:59 compute-0 systemd-rc-local-generator[194729]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:38:59 compute-0 sudo[194700]: pam_unix(sudo:session): session closed for user root
Jan 23 11:38:59 compute-0 sudo[194811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypfvacslxhhjxqquswazlqjbzmnwwky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168337.1436853-513-265131061889982/AnsiballZ_systemd.py'
Jan 23 11:38:59 compute-0 sudo[194811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:38:59 compute-0 python3.9[194813]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:38:59 compute-0 systemd[1]: Reloading.
Jan 23 11:38:59 compute-0 systemd-sysv-generator[194843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:38:59 compute-0 systemd-rc-local-generator[194840]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:00 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 23 11:39:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64d45047802b2b04b4ab86a92188cc6cc8b06ce3f3baec134a8678f24be255/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64d45047802b2b04b4ab86a92188cc6cc8b06ce3f3baec134a8678f24be255/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64d45047802b2b04b4ab86a92188cc6cc8b06ce3f3baec134a8678f24be255/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b64d45047802b2b04b4ab86a92188cc6cc8b06ce3f3baec134a8678f24be255/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.
Jan 23 11:39:00 compute-0 podman[194853]: 2026-01-23 11:39:00.200533771 +0000 UTC m=+0.109330518 container init 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + sudo -E kolla_set_configs
Jan 23 11:39:00 compute-0 sudo[194875]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: sudo: unable to send audit message: Operation not permitted
Jan 23 11:39:00 compute-0 sudo[194875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:39:00 compute-0 podman[194853]: 2026-01-23 11:39:00.223741852 +0000 UTC m=+0.132538589 container start 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 11:39:00 compute-0 podman[194853]: ceilometer_agent_compute
Jan 23 11:39:00 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Validating config file
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Copying service configuration files
Jan 23 11:39:00 compute-0 sudo[194811]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: INFO:__main__:Writing out command to execute
Jan 23 11:39:00 compute-0 sudo[194875]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: ++ cat /run_command
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + ARGS=
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + sudo kolla_copy_cacerts
Jan 23 11:39:00 compute-0 sudo[194890]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: sudo: unable to send audit message: Operation not permitted
Jan 23 11:39:00 compute-0 sudo[194890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:39:00 compute-0 sudo[194890]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + [[ ! -n '' ]]
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + . kolla_extend_start
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + umask 0022
Jan 23 11:39:00 compute-0 ceilometer_agent_compute[194869]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 23 11:39:00 compute-0 podman[194876]: 2026-01-23 11:39:00.310977917 +0000 UTC m=+0.072855536 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260120, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 23 11:39:00 compute-0 systemd[1]: 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad-2862708b4ef1e2fd.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:39:00 compute-0 systemd[1]: 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad-2862708b4ef1e2fd.service: Failed with result 'exit-code'.
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.042 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.043 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.044 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.045 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.046 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.047 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.048 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.049 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.050 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.051 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.052 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.053 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.054 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.055 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.074 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.074 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.075 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.076 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.077 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.078 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.079 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.080 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.081 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.085 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.086 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.087 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.088 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
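Taken together, the non-default options in the dump above correspond roughly to a ceilometer.conf fragment like the following. This is a reconstruction from the logged values, not the file's actual contents; secret values (shown as **** in the log) are omitted:

    # /etc/ceilometer/ceilometer.conf -- fragment reconstructed from the
    # option dump above; only options that drive this agent's behaviour
    # are shown. polling_namespaces also arrives via the command line.
    [DEFAULT]
    debug = True
    host = compute-0.ctlplane.example.com
    hypervisor_inspector = libvirt
    libvirt_type = kvm

    [compute]
    instance_discovery_method = libvirt_metadata
    resource_cache_expiry = 3600

    [polling]
    cfg_file = polling.yaml
    heartbeat_socket_dir = /var/lib/ceilometer
    enable_prometheus_exporter = True
    prometheus_listen_addresses = [::]:9101
    prometheus_tls_enable = True
    prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt
    prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key

Note that the [polling] section overrides the top-level defaults: the exporter is disabled globally (enable_prometheus_exporter = False, 127.0.0.1:9101, no TLS) but enabled for the polling agent on [::]:9101 with TLS.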
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.088 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.089 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.091 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.091 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 23 11:39:01 compute-0 python3.9[195051]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
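The ansible line above records an ansible.builtin.slurp call against the deployed-services manifest; slurp reads a remote file and returns it base64-encoded. A sketch of the corresponding playbook tasks (task and variable names invented here) would be:

    # hypothetical tasks matching the slurp invocation logged above;
    # slurp returns the file body base64-encoded in the `content` key.
    - name: Read deployed services list
      ansible.builtin.slurp:
        src: /var/lib/edpm-config/deployed_services.yaml
      register: deployed_services

    - name: Show the decoded YAML
      ansible.builtin.debug:
        msg: "{{ deployed_services.content | b64decode }}"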
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.237 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.237 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.252 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.252 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.253 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.254 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.254 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.255 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.255 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.256 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.256 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.283 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
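The URI in the line above is the standard libvirt system socket. As an illustration only (not ceilometer's own code), a minimal python-libvirt probe against the same URI looks like:

    # illustrative sketch: open the qemu:///system URI read-only and list
    # the domains the compute pollsters would inspect.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        print(dom.UUIDString(), dom.name())
    conn.close()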
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.291 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.292 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.292 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.292 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.293 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.294 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.294 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
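No dynamic pollster definitions exist on this node, so the agent falls back to its built-in compute pollsters. Had there been any, a definition dropped into /etc/ceilometer/pollsters.d is a YAML list of pollster mappings; a hypothetical minimal example (meter name, endpoint, and attributes invented, field names per ceilometer's dynamic-pollster format) might be:

    # /etc/ceilometer/pollsters.d/example.yaml -- hypothetical definition;
    # the agent would load it at the startup step logged above.
    ---
    - name: "dynamic.example.requests"
      sample_type: "gauge"
      unit: "request"
      value_attribute: "count"
      url_path: "v1/example/requests"
      resource_id_attribute: "id"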
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.421 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.422 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.423 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.424 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.425 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.426 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.427 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.428 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.429 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.430 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.431 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.432 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.433 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.434 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.434 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.435 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.435 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.435 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.435 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
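
The block of "option = value" lines above is oslo.config's standard startup dump: every registered option is logged at DEBUG via log_opt_values(), and options registered with secret=True (passwords, messaging URLs, access keys) are masked as ****. A minimal sketch of that mechanism, using a couple of illustrative options rather than ceilometer's full set:

    # Sketch of how the dump above is produced with oslo.config.
    # The option names here are illustrative, not the full ceilometer set.
    from oslo_config import cfg
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('rate_limit_burst', default=0),
        cfg.BoolOpt('use_syslog', default=False),
    ])
    CONF.register_opts([cfg.StrOpt('telemetry_secret', secret=True)],
                       group='publisher')     # secret=True prints as ****

    CONF([])                                  # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits the "option = value" lines
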
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.435 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
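
cotyledon is the process supervisor hosting the agent; "Run service AgentManager(0) [14] wait_forever" means service instance 0 is now running in worker pid 14, and the worker blocks until shutdown. A hedged sketch of that service pattern (PollingService here is a stand-in, not ceilometer's AgentManager):

    # Sketch of the cotyledon service pattern; the service body is a stand-in.
    import time
    import cotyledon

    class PollingService(cotyledon.Service):      # stand-in for AgentManager
        def run(self):
            while True:                           # worker "waits forever" here
                time.sleep(60)

    sm = cotyledon.ServiceManager()
    sm.add(PollingService, workers=1)             # one worker process, as above
    sm.run()                                      # parent supervises the worker
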
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.435 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5928MB free_disk=72.6475944519043GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.435 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.435 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
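
The Acquiring/acquired pair above is oslo.concurrency's named internal lock, which the nova resource tracker uses to serialize updates to its view of compute resources. A minimal sketch of the same pattern:

    # Sketch of the oslo.concurrency named-lock pattern behind the
    # "Acquiring lock / Lock acquired" pair above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Only one caller at a time may rebuild the tracked resource view;
        # lockutils logs the acquire/release around this body.
        pass

    _update_available_resource()
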
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.436 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
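
The dict logged above is the parsed form of polling.yaml: one source named "pollsters", a 120-second interval, and a list of meter name patterns (globs such as disk.* are allowed). A sketch loading an equivalent definition, assuming PyYAML as the parser:

    # Sketch: the parsed polling definition shown above, loaded from an
    # equivalent inline document (PyYAML assumed available).
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    config = yaml.safe_load(POLLING_YAML)
    for source in config['sources']:
        print(source['name'], source['interval'], source['meters'])
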
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.448 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.448 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
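
These two messages reflect threads_to_process_pollsters = 1 (see the config dump above): a single executor thread works through every pollster in the source sequentially. The pattern, reduced to its core:

    # Sketch of the single-worker executor the two messages above describe:
    # more tasks than workers, so pollsters run one after another.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f"polled {name}"

    pollsters = ['power.state', 'cpu', 'memory.usage']
    with ThreadPoolExecutor(max_workers=1) as executor:  # 1 worker thread
        for result in executor.map(poll, pollsters):
            print(result)
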
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.448 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.449 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.449 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
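
With compute.instance_discovery_method = libvirt_metadata, the agent discovers instances by querying the local libvirt daemon directly rather than the nova API. A minimal read-only connection sketch, assuming the libvirt-python bindings and a running local libvirtd:

    # Minimal read-only libvirt connection, as the agent opens above.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        for dom in conn.listAllDomains():
            print(dom.UUIDString(), dom.name(), dom.isActive())
    finally:
        conn.close()
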
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.449 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.452 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.452 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.456 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:39:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.548 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.548 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.574 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.587 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.588 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:39:01 compute-0 nova_compute[185173]: 2026-01-23 11:39:01.589 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:39:01 compute-0 anacron[29940]: Job `cron.daily' started
Jan 23 11:39:01 compute-0 anacron[29940]: Job `cron.daily' terminated
Jan 23 11:39:01 compute-0 sudo[195216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xktxnoabpixcouktnfuafpqbybmcposi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168341.7154572-558-157067501606291/AnsiballZ_stat.py'
Jan 23 11:39:01 compute-0 sudo[195216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:02 compute-0 python3.9[195218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:02 compute-0 sudo[195216]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:02 compute-0 sudo[195341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrrqktzfcclizxspzsylesxpvlvnzwxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168341.7154572-558-157067501606291/AnsiballZ_copy.py'
Jan 23 11:39:02 compute-0 sudo[195341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:02 compute-0 python3.9[195343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168341.7154572-558-157067501606291/.source.yaml _original_basename=.bm6m3kdx follow=False checksum=9ff17cf2f0661c94624db8c0c2084c8f47b5f911 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:02 compute-0 sudo[195341]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:03 compute-0 sudo[195493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sddadtpaievlzniqodxykorkuyvctmoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168342.8659692-573-18574509078000/AnsiballZ_stat.py'
Jan 23 11:39:03 compute-0 sudo[195493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:03 compute-0 python3.9[195495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:03 compute-0 sudo[195493]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:03 compute-0 sudo[195616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjnqkrllfrmhptftzukoiyycgvjwobbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168342.8659692-573-18574509078000/AnsiballZ_copy.py'
Jan 23 11:39:03 compute-0 sudo[195616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:03 compute-0 python3.9[195618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168342.8659692-573-18574509078000/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:39:03 compute-0 sudo[195616]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:04 compute-0 sudo[195777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpfkobnkjckklxhdffxlohkulzfmdmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168344.4107761-594-86182324552197/AnsiballZ_file.py'
Jan 23 11:39:04 compute-0 sudo[195777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:04 compute-0 podman[195742]: 2026-01-23 11:39:04.66202964 +0000 UTC m=+0.053843129 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:39:04 compute-0 python3.9[195785]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:04 compute-0 sudo[195777]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:05 compute-0 sudo[195937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcesfprynlxsifbipfccjmozubfeyuvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168345.039947-602-4144824660362/AnsiballZ_file.py'
Jan 23 11:39:05 compute-0 sudo[195937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:05 compute-0 python3.9[195939]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:39:05 compute-0 sudo[195937]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:05 compute-0 sudo[196089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfgyvgotcpdafmmjwfkqnlsxpdjplxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168345.6536422-610-34953188711148/AnsiballZ_stat.py'
Jan 23 11:39:05 compute-0 sudo[196089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:06 compute-0 python3.9[196091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:06 compute-0 sudo[196089]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:06 compute-0 sudo[196167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptnyhucjwggvoqckqkcbzpsmmeobnglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168345.6536422-610-34953188711148/AnsiballZ_file.py'
Jan 23 11:39:06 compute-0 sudo[196167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:06 compute-0 podman[196169]: 2026-01-23 11:39:06.451070039 +0000 UTC m=+0.072675990 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 11:39:06 compute-0 python3.9[196170]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.5_5948tb recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:06 compute-0 sudo[196167]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:07 compute-0 python3.9[196346]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:09 compute-0 sudo[196767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxdyqdznpqzuqxpgobpmjxzkrfaywsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168348.946564-647-32874167299878/AnsiballZ_container_config_data.py'
Jan 23 11:39:09 compute-0 sudo[196767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:09 compute-0 python3.9[196769]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 23 11:39:09 compute-0 sudo[196767]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:10 compute-0 sudo[196919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrddxaoqgdwqmdeaxybkbxynionydny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168349.7596686-658-14380044445330/AnsiballZ_container_config_hash.py'
Jan 23 11:39:10 compute-0 sudo[196919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:10 compute-0 python3.9[196921]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:39:10 compute-0 sudo[196919]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:10 compute-0 sudo[197071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-netxfiyimfodbmcicemmvvzrhfnnlcsc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168350.5838077-668-111398176709911/AnsiballZ_edpm_container_manage.py'
Jan 23 11:39:10 compute-0 sudo[197071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:11 compute-0 python3[197073]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:39:11 compute-0 podman[197110]: 2026-01-23 11:39:11.390549097 +0000 UTC m=+0.057304486 container create 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:39:11 compute-0 podman[197110]: 2026-01-23 11:39:11.367217503 +0000 UTC m=+0.033972922 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 23 11:39:11 compute-0 python3[197073]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 23 11:39:11 compute-0 sudo[197071]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:12 compute-0 sudo[197298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkpnxuoeaoxlyotwwqupwcqvzolaplq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168351.7300344-676-169865614128471/AnsiballZ_stat.py'
Jan 23 11:39:12 compute-0 sudo[197298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:12 compute-0 python3.9[197300]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:39:12 compute-0 sudo[197298]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:12 compute-0 sudo[197452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcxlcpjvukzudssbngckpgafuxzzmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168352.7348504-685-39613443452086/AnsiballZ_file.py'
Jan 23 11:39:12 compute-0 sudo[197452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:13 compute-0 python3.9[197454]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:13 compute-0 sudo[197452]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:13 compute-0 sudo[197528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xleznszlsdqqhvckfziqeaxgpmerejfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168352.7348504-685-39613443452086/AnsiballZ_stat.py'
Jan 23 11:39:13 compute-0 sudo[197528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:13 compute-0 python3.9[197530]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:39:13 compute-0 sudo[197528]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:14 compute-0 sudo[197679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szkhiyhwdbhbvhofqroxoujibcrpbdpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168353.7283504-685-249763049656510/AnsiballZ_copy.py'
Jan 23 11:39:14 compute-0 sudo[197679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:14 compute-0 python3.9[197681]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168353.7283504-685-249763049656510/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:14 compute-0 sudo[197679]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:14 compute-0 sudo[197755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqfwyunnkshfhehphkdammwxqjqbjoys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168353.7283504-685-249763049656510/AnsiballZ_systemd.py'
Jan 23 11:39:14 compute-0 sudo[197755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:14 compute-0 python3.9[197757]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:39:14 compute-0 systemd[1]: Reloading.
Jan 23 11:39:14 compute-0 systemd-rc-local-generator[197780]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:14 compute-0 systemd-sysv-generator[197786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Jan 23 11:39:15 compute-0 sudo[197755]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:15 compute-0 sudo[197866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxpvymmqikeltvrilfzejqllxsvluyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168353.7283504-685-249763049656510/AnsiballZ_systemd.py'
Jan 23 11:39:15 compute-0 sudo[197866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:15 compute-0 python3.9[197868]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:39:15 compute-0 systemd[1]: Reloading.
Jan 23 11:39:15 compute-0 systemd-rc-local-generator[197900]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:15 compute-0 systemd-sysv-generator[197904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Jan 23 11:39:16 compute-0 systemd[1]: Starting node_exporter container...
Jan 23 11:39:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc557bb92691e0c4fb626c4916073609d6b4007472a69325a2b4e0b85163986/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc557bb92691e0c4fb626c4916073609d6b4007472a69325a2b4e0b85163986/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:16 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.
Jan 23 11:39:16 compute-0 podman[197908]: 2026-01-23 11:39:16.171257409 +0000 UTC m=+0.134556480 container init 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.190Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=arp
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=bcache
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=bonding
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=cpu
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=edac
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=filefd
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=netclass
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=netdev
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=netstat
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=nfs
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=nvme
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=softnet
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=systemd
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=xfs
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.191Z caller=node_exporter.go:117 level=info collector=zfs
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.192Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 23 11:39:16 compute-0 node_exporter[197923]: ts=2026-01-23T11:39:16.192Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 23 11:39:16 compute-0 podman[197908]: 2026-01-23 11:39:16.201471196 +0000 UTC m=+0.164770267 container start 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:39:16 compute-0 podman[197908]: node_exporter
Jan 23 11:39:16 compute-0 systemd[1]: Started node_exporter container.
Jan 23 11:39:16 compute-0 sudo[197866]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:16 compute-0 podman[197932]: 2026-01-23 11:39:16.273185572 +0000 UTC m=+0.052930087 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:39:17 compute-0 python3.9[198106]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:39:17 compute-0 sudo[198256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhablcnhrkewdjjmhazjtgvgwgjowduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168357.609918-730-246664458974958/AnsiballZ_stat.py'
Jan 23 11:39:17 compute-0 sudo[198256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:18 compute-0 python3.9[198258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:18 compute-0 sudo[198256]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:18 compute-0 sudo[198381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwohtggjqirdpmgqklqmfwllsyjzqry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168357.609918-730-246664458974958/AnsiballZ_copy.py'
Jan 23 11:39:18 compute-0 sudo[198381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:18 compute-0 python3.9[198383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168357.609918-730-246664458974958/.source.yaml _original_basename=._vwa2t_9 follow=False checksum=95967e15f4fcb3d3fe56a4573f722d1f6efbbfce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:18 compute-0 sudo[198381]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:19 compute-0 sudo[198533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigsnhpakftsuadtcqzrgojyuvutddmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168358.8603792-745-187659660862599/AnsiballZ_stat.py'
Jan 23 11:39:19 compute-0 sudo[198533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:19 compute-0 python3.9[198535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:19 compute-0 sudo[198533]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:19 compute-0 sudo[198656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygdqhuesjjbxfumtlmgxtphinaoddgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168358.8603792-745-187659660862599/AnsiballZ_copy.py'
Jan 23 11:39:19 compute-0 sudo[198656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:19 compute-0 python3.9[198658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168358.8603792-745-187659660862599/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:39:19 compute-0 sudo[198656]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:20 compute-0 sudo[198808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qinaughthwedykaphquslgdtstcljteh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168360.3995893-766-128904225441088/AnsiballZ_file.py'
Jan 23 11:39:20 compute-0 sudo[198808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:20 compute-0 python3.9[198810]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:20 compute-0 sudo[198808]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:21 compute-0 sudo[198960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnzsyedtvjcueyiyjrogjnuhhsjsdmxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168361.001759-774-160474684687321/AnsiballZ_file.py'
Jan 23 11:39:21 compute-0 sudo[198960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:21 compute-0 python3.9[198962]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:39:21 compute-0 sudo[198960]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:21 compute-0 sudo[199112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfegwiquttmphulmldepmihtgndvxbxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168361.6677024-782-146646265041344/AnsiballZ_stat.py'
Jan 23 11:39:21 compute-0 sudo[199112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:22 compute-0 python3.9[199114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:22 compute-0 sudo[199112]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:22 compute-0 sudo[199190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-athpzzimtrizxsphemarfygnhwkybtgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168361.6677024-782-146646265041344/AnsiballZ_file.py'
Jan 23 11:39:22 compute-0 sudo[199190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:22 compute-0 python3.9[199192]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.tcxt4dkd recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:22 compute-0 sudo[199190]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:23 compute-0 python3.9[199342]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:25 compute-0 sudo[199763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqxwdktkwexpfdojmuoktbtfacfmmpzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168364.8384376-819-225472859142444/AnsiballZ_container_config_data.py'
Jan 23 11:39:25 compute-0 sudo[199763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:25 compute-0 python3.9[199765]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 23 11:39:25 compute-0 sudo[199763]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:25 compute-0 sudo[199915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvdmqyxxnfhhrbrtvtdfdxlpjdoiaot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168365.647821-830-270745277311816/AnsiballZ_container_config_hash.py'
Jan 23 11:39:25 compute-0 sudo[199915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:26 compute-0 python3.9[199917]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:39:26 compute-0 sudo[199915]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:26 compute-0 sudo[200067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbxigvmqzphorfflslsdnpinibolbho ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168366.549333-840-213773595604074/AnsiballZ_edpm_container_manage.py'
Jan 23 11:39:26 compute-0 sudo[200067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:27 compute-0 python3[200069]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:39:28 compute-0 podman[200082]: 2026-01-23 11:39:28.534831851 +0000 UTC m=+1.329606575 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 23 11:39:28 compute-0 podman[200179]: 2026-01-23 11:39:28.650474266 +0000 UTC m=+0.044501415 container create 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible)
Jan 23 11:39:28 compute-0 podman[200179]: 2026-01-23 11:39:28.625391048 +0000 UTC m=+0.019418237 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 23 11:39:28 compute-0 python3[200069]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 23 11:39:28 compute-0 sudo[200067]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:39:29.078 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:39:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:39:29.079 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:39:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:39:29.079 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:39:29 compute-0 sudo[200366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tseiwuqlclogunhtvvhvubmyoxfrdmgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168368.9420626-848-116505902229924/AnsiballZ_stat.py'
Jan 23 11:39:29 compute-0 sudo[200366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:29 compute-0 python3.9[200368]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:39:29 compute-0 sudo[200366]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:29 compute-0 sudo[200520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aotszxwhrevtfhotquzabvmofyhcvhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168369.6499612-857-228888971459099/AnsiballZ_file.py'
Jan 23 11:39:29 compute-0 sudo[200520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:30 compute-0 python3.9[200522]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:30 compute-0 sudo[200520]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:30 compute-0 sudo[200596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huuvfndltfthposcyisowrpusbmruufs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168369.6499612-857-228888971459099/AnsiballZ_stat.py'
Jan 23 11:39:30 compute-0 sudo[200596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:30 compute-0 podman[200598]: 2026-01-23 11:39:30.419010062 +0000 UTC m=+0.053860429 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 23 11:39:30 compute-0 systemd[1]: 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad-2862708b4ef1e2fd.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:39:30 compute-0 systemd[1]: 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad-2862708b4ef1e2fd.service: Failed with result 'exit-code'.
Jan 23 11:39:30 compute-0 python3.9[200599]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:39:30 compute-0 sudo[200596]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:30 compute-0 sudo[200764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpqwrvhkdpudhrlbvcofnsuzugrazavi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168370.6008704-857-218101210982740/AnsiballZ_copy.py'
Jan 23 11:39:31 compute-0 sudo[200764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:31 compute-0 python3.9[200766]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168370.6008704-857-218101210982740/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:31 compute-0 sudo[200764]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:31 compute-0 sudo[200840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aessydsgjxfwahcplxzvjbtcsgegcgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168370.6008704-857-218101210982740/AnsiballZ_systemd.py'
Jan 23 11:39:31 compute-0 sudo[200840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:31 compute-0 python3.9[200842]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:39:31 compute-0 systemd[1]: Reloading.
Jan 23 11:39:31 compute-0 systemd-rc-local-generator[200871]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:31 compute-0 systemd-sysv-generator[200875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:39:32 compute-0 sudo[200840]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:32 compute-0 sudo[200952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbkqyeewauociajxcdprbsgiacrwesiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168370.6008704-857-218101210982740/AnsiballZ_systemd.py'
Jan 23 11:39:32 compute-0 sudo[200952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:32 compute-0 python3.9[200954]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:39:32 compute-0 systemd[1]: Reloading.
Jan 23 11:39:32 compute-0 systemd-rc-local-generator[200988]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:32 compute-0 systemd-sysv-generator[200991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:39:33 compute-0 systemd[1]: Starting podman_exporter container...
Jan 23 11:39:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590b06a1a144aa1c3ab0fb135282ceb7d9457d74e0d856efb634d8b24efb02c6/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590b06a1a144aa1c3ab0fb135282ceb7d9457d74e0d856efb634d8b24efb02c6/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.
Jan 23 11:39:33 compute-0 podman[200995]: 2026-01-23 11:39:33.238389111 +0000 UTC m=+0.112517518 container init 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.254Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.254Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.254Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.254Z caller=handler.go:105 level=info collector=container
Jan 23 11:39:33 compute-0 podman[200995]: 2026-01-23 11:39:33.267772827 +0000 UTC m=+0.141901234 container start 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 11:39:33 compute-0 podman[200995]: podman_exporter
Jan 23 11:39:33 compute-0 systemd[1]: Starting Podman API Service...
Jan 23 11:39:33 compute-0 systemd[1]: Started Podman API Service.
Jan 23 11:39:33 compute-0 systemd[1]: Started podman_exporter container.
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="Setting parallel job count to 25"
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="Using sqlite as database backend"
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 23 11:39:33 compute-0 podman[201022]: @ - - [23/Jan/2026:11:39:33 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 23 11:39:33 compute-0 sudo[200952]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:33 compute-0 podman[201022]: time="2026-01-23T11:39:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:39:33 compute-0 podman[201020]: 2026-01-23 11:39:33.324781644 +0000 UTC m=+0.045939271 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:39:33 compute-0 podman[201022]: @ - - [23/Jan/2026:11:39:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18095 "" "Go-http-client/1.1"
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.328Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.329Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 23 11:39:33 compute-0 systemd[1]: 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693-81a87e5f3d9f05.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:39:33 compute-0 systemd[1]: 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693-81a87e5f3d9f05.service: Failed with result 'exit-code'.
Jan 23 11:39:33 compute-0 podman_exporter[201011]: ts=2026-01-23T11:39:33.329Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 23 11:39:33 compute-0 python3.9[201207]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:39:34 compute-0 sudo[201368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxbkuoztakugrckglyxvzggiajtfrdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168374.4505334-902-134348779244678/AnsiballZ_stat.py'
Jan 23 11:39:34 compute-0 sudo[201368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:34 compute-0 podman[201331]: 2026-01-23 11:39:34.779954594 +0000 UTC m=+0.061769218 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 11:39:34 compute-0 python3.9[201376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:35 compute-0 sudo[201368]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:35 compute-0 sudo[201501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnzwsnwlklkrjnlsdjwijlyedlwgorlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168374.4505334-902-134348779244678/AnsiballZ_copy.py'
Jan 23 11:39:35 compute-0 sudo[201501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:35 compute-0 python3.9[201503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168374.4505334-902-134348779244678/.source.yaml _original_basename=.irhmzbhe follow=False checksum=55ddcc834a3d98a66f20e6fb912dd77fd696772c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:35 compute-0 sudo[201501]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:36 compute-0 sudo[201664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmkmxbfztnwjadeqzocmqzhqszjetgwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168376.0975065-917-217762959558316/AnsiballZ_stat.py'
Jan 23 11:39:36 compute-0 sudo[201664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:36 compute-0 podman[201627]: 2026-01-23 11:39:36.642330118 +0000 UTC m=+0.100959719 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:39:36 compute-0 python3.9[201671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:36 compute-0 sudo[201664]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:37 compute-0 sudo[201803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibcsseczhtuwqqqjuneuqwzhujakpjay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168376.0975065-917-217762959558316/AnsiballZ_copy.py'
Jan 23 11:39:37 compute-0 sudo[201803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:37 compute-0 python3.9[201805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168376.0975065-917-217762959558316/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:39:37 compute-0 sudo[201803]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:37 compute-0 auditd[703]: Audit daemon rotating log files
Jan 23 11:39:38 compute-0 sudo[201955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsbnayatmpbrfvbpvgejllyrrkhheadd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168377.9302802-938-14058810267391/AnsiballZ_file.py'
Jan 23 11:39:38 compute-0 sudo[201955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:38 compute-0 python3.9[201957]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:38 compute-0 sudo[201955]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:38 compute-0 sudo[202107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkosvizdkcpatzhqiimqxbnikwdwyiou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168378.6944623-946-203917743565758/AnsiballZ_file.py'
Jan 23 11:39:38 compute-0 sudo[202107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:39 compute-0 python3.9[202109]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:39:39 compute-0 sudo[202107]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:39 compute-0 sudo[202259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utekenjzxbyypaxzucoorcqwiicsfmbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168379.4148347-954-259444685982322/AnsiballZ_stat.py'
Jan 23 11:39:39 compute-0 sudo[202259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:39 compute-0 python3.9[202261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:39 compute-0 sudo[202259]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:40 compute-0 sudo[202337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubgsyruhoguxkylfnliepkpnquglmmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168379.4148347-954-259444685982322/AnsiballZ_file.py'
Jan 23 11:39:40 compute-0 sudo[202337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:40 compute-0 python3.9[202339]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.zk5rh6t6 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:40 compute-0 sudo[202337]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:41 compute-0 python3.9[202489]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:42 compute-0 sudo[202910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azalcurfmasfnalbntnisvtmctcodzyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168382.6269033-991-138602199323392/AnsiballZ_container_config_data.py'
Jan 23 11:39:42 compute-0 sudo[202910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:43 compute-0 python3.9[202912]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 23 11:39:43 compute-0 sudo[202910]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:43 compute-0 sudo[203062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifvbfwlkofqspiislaeonbzxovgusoro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168383.4501204-1002-68780477549093/AnsiballZ_container_config_hash.py'
Jan 23 11:39:43 compute-0 sudo[203062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:43 compute-0 python3.9[203064]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:39:43 compute-0 sudo[203062]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:44 compute-0 sudo[203214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkfiuzaedlmqgxluhlfazaotxevimgw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168384.2194939-1012-89698135007449/AnsiballZ_edpm_container_manage.py'
Jan 23 11:39:44 compute-0 sudo[203214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:44 compute-0 python3[203216]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:39:46 compute-0 podman[203271]: 2026-01-23 11:39:46.779117095 +0000 UTC m=+0.102028375 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:39:47 compute-0 podman[203227]: 2026-01-23 11:39:47.032158974 +0000 UTC m=+2.227915371 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 23 11:39:47 compute-0 podman[203349]: 2026-01-23 11:39:47.166314213 +0000 UTC m=+0.047712401 container create cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 11:39:47 compute-0 podman[203349]: 2026-01-23 11:39:47.14154645 +0000 UTC m=+0.022944728 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 23 11:39:47 compute-0 python3[203216]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 23 11:39:47 compute-0 sudo[203214]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:47 compute-0 sudo[203534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigzhtihfeywlvnvsojbdsouhdclzrpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168387.556778-1020-75180590443436/AnsiballZ_stat.py'
Jan 23 11:39:47 compute-0 sudo[203534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:48 compute-0 python3.9[203536]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:39:48 compute-0 sudo[203534]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:48 compute-0 sudo[203688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brelkzsgtmibhoogvvwzzlbzokqvkzni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168388.3598068-1029-108758216208789/AnsiballZ_file.py'
Jan 23 11:39:48 compute-0 sudo[203688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:48 compute-0 python3.9[203690]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:48 compute-0 sudo[203688]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:49 compute-0 sudo[203764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrgczdldbqplyfrzuuqlnyiorrzrzufu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168388.3598068-1029-108758216208789/AnsiballZ_stat.py'
Jan 23 11:39:49 compute-0 sudo[203764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:49 compute-0 python3.9[203766]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:39:49 compute-0 sudo[203764]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:49 compute-0 sudo[203915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ougvalvmrgxwrvsgwyuitqshiyugsmec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168389.3891218-1029-229744294570989/AnsiballZ_copy.py'
Jan 23 11:39:49 compute-0 sudo[203915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:50 compute-0 python3.9[203917]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168389.3891218-1029-229744294570989/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:50 compute-0 sudo[203915]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:50 compute-0 sudo[203991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjjesurjqxqwduendqssyfkcgtazeqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168389.3891218-1029-229744294570989/AnsiballZ_systemd.py'
Jan 23 11:39:50 compute-0 sudo[203991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:50 compute-0 python3.9[203993]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:39:50 compute-0 systemd[1]: Reloading.
Jan 23 11:39:50 compute-0 systemd-sysv-generator[204024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:39:50 compute-0 systemd-rc-local-generator[204020]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:51 compute-0 sudo[203991]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:51 compute-0 sudo[204102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswjcrvhxsvdvaelgpgnypmtpwdpylho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168389.3891218-1029-229744294570989/AnsiballZ_systemd.py'
Jan 23 11:39:51 compute-0 sudo[204102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:51 compute-0 python3.9[204104]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:39:51 compute-0 systemd[1]: Reloading.
Jan 23 11:39:51 compute-0 systemd-rc-local-generator[204133]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:39:51 compute-0 systemd-sysv-generator[204138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:39:51 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 23 11:39:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55398ce91527fc465019d274d3bd7217c88e10a96a80a5384b3c754f58f8d0e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55398ce91527fc465019d274d3bd7217c88e10a96a80a5384b3c754f58f8d0e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55398ce91527fc465019d274d3bd7217c88e10a96a80a5384b3c754f58f8d0e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 11:39:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.
Jan 23 11:39:52 compute-0 podman[204144]: 2026-01-23 11:39:52.191493919 +0000 UTC m=+0.171484883 container init cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *bridge.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *coverage.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *datapath.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *iface.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *memory.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *ovn.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *pmd_perf.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *pmd_rxq.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: INFO    11:39:52 main.go:48: registering *vswitch.Collector
Jan 23 11:39:52 compute-0 openstack_network_exporter[204160]: NOTICE  11:39:52 main.go:76: listening on https://:9105/metrics
Jan 23 11:39:52 compute-0 podman[204144]: 2026-01-23 11:39:52.240254596 +0000 UTC m=+0.220245530 container start cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6)
Jan 23 11:39:52 compute-0 podman[204144]: openstack_network_exporter
Jan 23 11:39:52 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 23 11:39:52 compute-0 sudo[204102]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:52 compute-0 podman[204170]: 2026-01-23 11:39:52.358024399 +0000 UTC m=+0.108476234 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:39:53 compute-0 python3.9[204341]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:39:53 compute-0 sudo[204491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-visifypgjiwxqmbnajgjddfhqnranewk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168393.5672548-1074-228932395228163/AnsiballZ_stat.py'
Jan 23 11:39:53 compute-0 sudo[204491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:54 compute-0 python3.9[204493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:39:54 compute-0 sudo[204491]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:54 compute-0 sudo[204616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttfnznoccbyftmfivgjkpxdmtjycbirq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168393.5672548-1074-228932395228163/AnsiballZ_copy.py'
Jan 23 11:39:54 compute-0 sudo[204616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:54 compute-0 python3.9[204618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168393.5672548-1074-228932395228163/.source.yaml _original_basename=.vy9hgocf follow=False checksum=a165f132a6ae6124f098e3297395b6c608af80af backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:54 compute-0 sudo[204616]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:55 compute-0 sudo[204768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svcxozmznbscveovapshmqofaptebkqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168394.7677543-1089-199976079195489/AnsiballZ_find.py'
Jan 23 11:39:55 compute-0 sudo[204768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:55 compute-0 python3.9[204770]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:39:55 compute-0 sudo[204768]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:55 compute-0 sudo[204920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsmettnbafpixowhqkbercukwcgvmvdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168395.5330746-1099-155662432323159/AnsiballZ_podman_container_info.py'
Jan 23 11:39:55 compute-0 sudo[204920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:56 compute-0 python3.9[204922]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 23 11:39:56 compute-0 sudo[204920]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:56 compute-0 sudo[205085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxdqddgpbrnrsqlidqkgzdbiwevtbnwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168396.4388382-1107-71607838055361/AnsiballZ_podman_container_exec.py'
Jan 23 11:39:56 compute-0 sudo[205085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:57 compute-0 python3.9[205087]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:39:57 compute-0 systemd[1]: Started libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope.
Jan 23 11:39:57 compute-0 podman[205088]: 2026-01-23 11:39:57.225524283 +0000 UTC m=+0.115427206 container exec 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:39:57 compute-0 podman[205088]: 2026-01-23 11:39:57.259101974 +0000 UTC m=+0.149004927 container exec_died 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:39:57 compute-0 systemd[1]: libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope: Deactivated successfully.
Jan 23 11:39:57 compute-0 sudo[205085]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:57 compute-0 sudo[205269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfwtbzurkpngocgyyvzmopxwakcobbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168397.508181-1115-100481584055717/AnsiballZ_podman_container_exec.py'
Jan 23 11:39:57 compute-0 sudo[205269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:57 compute-0 python3.9[205271]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:39:58 compute-0 systemd[1]: Started libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope.
Jan 23 11:39:58 compute-0 podman[205272]: 2026-01-23 11:39:58.079092368 +0000 UTC m=+0.079583100 container exec 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 23 11:39:58 compute-0 podman[205272]: 2026-01-23 11:39:58.116564975 +0000 UTC m=+0.117055617 container exec_died 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 11:39:58 compute-0 systemd[1]: libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope: Deactivated successfully.
Jan 23 11:39:58 compute-0 sudo[205269]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:58 compute-0 sudo[205452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywtowvjjvfdufjjiizrsjtrkezxmkcuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168398.359082-1123-13334404435902/AnsiballZ_file.py'
Jan 23 11:39:58 compute-0 sudo[205452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:58 compute-0 python3.9[205454]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:39:58 compute-0 sudo[205452]: pam_unix(sudo:session): session closed for user root
Jan 23 11:39:59 compute-0 sudo[205604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxpkwnjdyqairqdntexzosdsiwpswiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168399.1091995-1132-50403151884813/AnsiballZ_podman_container_info.py'
Jan 23 11:39:59 compute-0 sudo[205604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:39:59 compute-0 python3.9[205606]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 23 11:39:59 compute-0 sudo[205604]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:00 compute-0 sudo[205768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwysuigmmwutuucrmnpyuhxskrvrkdsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168399.7624085-1140-52920030618028/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:00 compute-0 sudo[205768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:00 compute-0 python3.9[205770]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:00 compute-0 systemd[1]: Started libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope.
Jan 23 11:40:00 compute-0 podman[205771]: 2026-01-23 11:40:00.291019463 +0000 UTC m=+0.068444504 container exec d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 11:40:00 compute-0 podman[205771]: 2026-01-23 11:40:00.321260152 +0000 UTC m=+0.098685193 container exec_died d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 11:40:00 compute-0 systemd[1]: libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope: Deactivated successfully.
Jan 23 11:40:00 compute-0 sudo[205768]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:00 compute-0 podman[205878]: 2026-01-23 11:40:00.749874164 +0000 UTC m=+0.067070180 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 23 11:40:00 compute-0 systemd[1]: 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad-2862708b4ef1e2fd.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:40:00 compute-0 systemd[1]: 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad-2862708b4ef1e2fd.service: Failed with result 'exit-code'.
Jan 23 11:40:00 compute-0 sudo[205970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyilvrplqkricylfimuwzkvjhfxkkqxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168400.5516489-1148-60317333058186/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:00 compute-0 sudo[205970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:01 compute-0 python3.9[205972]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:01 compute-0 systemd[1]: Started libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope.
Jan 23 11:40:01 compute-0 podman[205973]: 2026-01-23 11:40:01.172802896 +0000 UTC m=+0.087809573 container exec d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 11:40:01 compute-0 podman[205973]: 2026-01-23 11:40:01.203057854 +0000 UTC m=+0.118064441 container exec_died d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 11:40:01 compute-0 systemd[1]: libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope: Deactivated successfully.
Jan 23 11:40:01 compute-0 sudo[205970]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.581 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.582 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.602 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.603 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.603 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.630 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.630 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.631 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.631 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:40:01 compute-0 sudo[206155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaeukqpfxtfmvacorosvunbwvkqoscvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168401.43367-1156-70772854251790/AnsiballZ_file.py'
Jan 23 11:40:01 compute-0 sudo[206155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.806 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.808 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5884MB free_disk=72.4378547668457GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.809 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.809 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.879 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.880 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.910 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.931 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.933 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:40:01 compute-0 nova_compute[185173]: 2026-01-23 11:40:01.933 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:40:01 compute-0 python3.9[206157]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:01 compute-0 sudo[206155]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.565 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.566 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.566 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.589 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.589 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.589 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:02 compute-0 nova_compute[185173]: 2026-01-23 11:40:02.590 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:02 compute-0 sudo[206307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadfithtkwbrcqtfqvteidcjlotjnfpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168402.208362-1165-137293164652616/AnsiballZ_podman_container_info.py'
Jan 23 11:40:02 compute-0 sudo[206307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:02 compute-0 python3.9[206309]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 23 11:40:02 compute-0 sudo[206307]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:03 compute-0 nova_compute[185173]: 2026-01-23 11:40:03.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:40:03 compute-0 nova_compute[185173]: 2026-01-23 11:40:03.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:40:03 compute-0 sudo[206469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqysnnhtyckjjegrresssckeatxazdra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168403.0940773-1173-202211618096931/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:03 compute-0 sudo[206469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:03 compute-0 podman[206471]: 2026-01-23 11:40:03.513928028 +0000 UTC m=+0.090513820 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:40:03 compute-0 python3.9[206472]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:03 compute-0 systemd[1]: Started libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope.
Jan 23 11:40:03 compute-0 podman[206496]: 2026-01-23 11:40:03.735491458 +0000 UTC m=+0.101057420 container exec 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 11:40:03 compute-0 podman[206496]: 2026-01-23 11:40:03.769018507 +0000 UTC m=+0.134584449 container exec_died 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Jan 23 11:40:03 compute-0 sudo[206469]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:03 compute-0 systemd[1]: libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope: Deactivated successfully.
Jan 23 11:40:04 compute-0 sudo[206674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoqfmyswqztaswwfyogrytuuzhwkoysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168404.034129-1181-56812600041132/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:04 compute-0 sudo[206674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:04 compute-0 python3.9[206676]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:04 compute-0 systemd[1]: Started libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope.
Jan 23 11:40:04 compute-0 podman[206677]: 2026-01-23 11:40:04.629996435 +0000 UTC m=+0.094314004 container exec 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 11:40:04 compute-0 podman[206677]: 2026-01-23 11:40:04.659795722 +0000 UTC m=+0.124113281 container exec_died 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 11:40:04 compute-0 systemd[1]: libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope: Deactivated successfully.
Jan 23 11:40:04 compute-0 sudo[206674]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:05 compute-0 sudo[206868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqaowsfeugieztznbjwhcklcwgsdwpru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168404.9089925-1189-187905495095600/AnsiballZ_file.py'
Jan 23 11:40:05 compute-0 sudo[206868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:05 compute-0 podman[206831]: 2026-01-23 11:40:05.22605301 +0000 UTC m=+0.058122659 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 11:40:05 compute-0 python3.9[206878]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:05 compute-0 sudo[206868]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:05 compute-0 sudo[207028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gicinbmujbufhiybifvkeroexfpuepit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168405.659316-1198-60423532976638/AnsiballZ_podman_container_info.py'
Jan 23 11:40:05 compute-0 sudo[207028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:06 compute-0 python3.9[207030]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 23 11:40:06 compute-0 sudo[207028]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:06 compute-0 sudo[207193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swkjiimjqbadmqaiiltxygdrchiexmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168406.3183048-1206-275431455903276/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:06 compute-0 sudo[207193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:06 compute-0 python3.9[207195]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:06 compute-0 systemd[1]: Started libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope.
Jan 23 11:40:06 compute-0 podman[207196]: 2026-01-23 11:40:06.831788191 +0000 UTC m=+0.061941643 container exec 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 11:40:06 compute-0 podman[207196]: 2026-01-23 11:40:06.862522041 +0000 UTC m=+0.092675483 container exec_died 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 11:40:06 compute-0 systemd[1]: libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope: Deactivated successfully.
Jan 23 11:40:06 compute-0 sudo[207193]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:06 compute-0 podman[207212]: 2026-01-23 11:40:06.946060768 +0000 UTC m=+0.113035388 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 11:40:07 compute-0 sudo[207400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmbomshxbjmaexelwqdvkhppnvncipck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168407.0687115-1214-276978037826805/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:07 compute-0 sudo[207400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:07 compute-0 python3.9[207402]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:07 compute-0 systemd[1]: Started libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope.
Jan 23 11:40:07 compute-0 podman[207403]: 2026-01-23 11:40:07.622056979 +0000 UTC m=+0.086089881 container exec 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 11:40:07 compute-0 podman[207403]: 2026-01-23 11:40:07.654731377 +0000 UTC m=+0.118764279 container exec_died 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:40:07 compute-0 systemd[1]: libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope: Deactivated successfully.
Jan 23 11:40:07 compute-0 sudo[207400]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:08 compute-0 sudo[207584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvvosvwaqciwekevigimpvyxdcqejhem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168407.9149957-1222-150064224101942/AnsiballZ_file.py'
Jan 23 11:40:08 compute-0 sudo[207584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:08 compute-0 python3.9[207586]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:08 compute-0 sudo[207584]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:08 compute-0 sudo[207736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzssxvorafusgswelgloaqcwvyyjrmmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168408.7035556-1231-82644522729848/AnsiballZ_podman_container_info.py'
Jan 23 11:40:08 compute-0 sudo[207736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:09 compute-0 python3.9[207738]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 23 11:40:09 compute-0 sudo[207736]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:09 compute-0 sudo[207901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjgulqogsoskmynmdibmfcqywitdemw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168409.4264631-1239-201417952391826/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:09 compute-0 sudo[207901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:09 compute-0 python3.9[207903]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:10 compute-0 systemd[1]: Started libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope.
Jan 23 11:40:10 compute-0 podman[207904]: 2026-01-23 11:40:10.013461655 +0000 UTC m=+0.085971608 container exec 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:40:10 compute-0 podman[207904]: 2026-01-23 11:40:10.047669391 +0000 UTC m=+0.120179344 container exec_died 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 11:40:10 compute-0 systemd[1]: libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope: Deactivated successfully.
Jan 23 11:40:10 compute-0 sudo[207901]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:10 compute-0 sudo[208085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuajguqvmoirespogaylszbjgmkncxxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168410.2553449-1247-94174269132530/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:10 compute-0 sudo[208085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:10 compute-0 python3.9[208087]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:10 compute-0 systemd[1]: Started libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope.
Jan 23 11:40:10 compute-0 podman[208088]: 2026-01-23 11:40:10.840916903 +0000 UTC m=+0.064265340 container exec 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:40:10 compute-0 podman[208088]: 2026-01-23 11:40:10.873501398 +0000 UTC m=+0.096849835 container exec_died 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:40:10 compute-0 systemd[1]: libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope: Deactivated successfully.
Jan 23 11:40:10 compute-0 sudo[208085]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:11 compute-0 sudo[208268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmyahlkgnmlpnxppczivxssdvdbwnjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168411.0805347-1255-235042824709479/AnsiballZ_file.py'
Jan 23 11:40:11 compute-0 sudo[208268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:11 compute-0 python3.9[208270]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:11 compute-0 sudo[208268]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:12 compute-0 sudo[208422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmrlfbvgrypvxflfmaxogdwtqcfkwaxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168411.896874-1264-194292875983887/AnsiballZ_podman_container_info.py'
Jan 23 11:40:12 compute-0 sudo[208422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:12 compute-0 python3.9[208424]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 23 11:40:12 compute-0 sudo[208422]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:12 compute-0 sudo[208588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oryjpsagleylddhutktpwqvtrigupmuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168412.6613662-1272-118870959926487/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:12 compute-0 sudo[208588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:13 compute-0 python3.9[208590]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:13 compute-0 systemd[1]: Started libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope.
Jan 23 11:40:13 compute-0 podman[208591]: 2026-01-23 11:40:13.183533001 +0000 UTC m=+0.067841859 container exec cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, managed_by=edpm_ansible)
Jan 23 11:40:13 compute-0 podman[208591]: 2026-01-23 11:40:13.218676031 +0000 UTC m=+0.102984889 container exec_died cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container)
Jan 23 11:40:13 compute-0 systemd[1]: libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope: Deactivated successfully.
Jan 23 11:40:13 compute-0 sudo[208588]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:13 compute-0 sudo[208772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iscqvbqdifnwhacrhqrqsfrumltmdqyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168413.4179833-1280-216802501265092/AnsiballZ_podman_container_exec.py'
Jan 23 11:40:13 compute-0 sudo[208772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:13 compute-0 python3.9[208774]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:40:14 compute-0 systemd[1]: Started libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope.
Jan 23 11:40:14 compute-0 podman[208775]: 2026-01-23 11:40:14.032901052 +0000 UTC m=+0.098175230 container exec cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter)
Jan 23 11:40:14 compute-0 podman[208775]: 2026-01-23 11:40:14.067825056 +0000 UTC m=+0.133099204 container exec_died cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64)
Jan 23 11:40:14 compute-0 systemd[1]: libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope: Deactivated successfully.
Jan 23 11:40:14 compute-0 sshd-session[208370]: Received disconnect from 58.82.169.249 port 47154:11:  [preauth]
Jan 23 11:40:14 compute-0 sshd-session[208370]: Disconnected from authenticating user root 58.82.169.249 port 47154 [preauth]
Jan 23 11:40:14 compute-0 sudo[208772]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:14 compute-0 sudo[208957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqbuiavssniefbqtozjrpctlxghenarj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168414.3079338-1288-33507708595097/AnsiballZ_file.py'
Jan 23 11:40:14 compute-0 sudo[208957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:14 compute-0 python3.9[208959]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:14 compute-0 sudo[208957]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:15 compute-0 sudo[209109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmnwnfylynvkcesilphholcwacumgipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168415.0489287-1297-60797121843740/AnsiballZ_file.py'
Jan 23 11:40:15 compute-0 sudo[209109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:15 compute-0 python3.9[209111]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:15 compute-0 sudo[209109]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:16 compute-0 sudo[209261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-masjuqzzwlhfxdsqmknkawrsbcgcunvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168415.7293696-1305-34895058062738/AnsiballZ_stat.py'
Jan 23 11:40:16 compute-0 sudo[209261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:16 compute-0 python3.9[209263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:16 compute-0 sudo[209261]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:16 compute-0 sudo[209384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaeocmxazxjmqwxgtyfszjijxrsnjvvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168415.7293696-1305-34895058062738/AnsiballZ_copy.py'
Jan 23 11:40:16 compute-0 sudo[209384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:16 compute-0 python3.9[209386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168415.7293696-1305-34895058062738/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:16 compute-0 sudo[209384]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:17 compute-0 sudo[209549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shhjvvjnpbxyqdzcncvdjlvifipmmsxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168416.9868531-1321-139467340206743/AnsiballZ_file.py'
Jan 23 11:40:17 compute-0 sudo[209549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:17 compute-0 podman[209510]: 2026-01-23 11:40:17.291353414 +0000 UTC m=+0.059892203 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:40:17 compute-0 python3.9[209557]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:17 compute-0 sudo[209549]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:17 compute-0 sudo[209712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atwdhmtcskknomfrawtoihagverhwigv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168417.6609042-1329-113500124573353/AnsiballZ_stat.py'
Jan 23 11:40:17 compute-0 sudo[209712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:18 compute-0 python3.9[209714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:18 compute-0 sudo[209712]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:18 compute-0 sudo[209790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxgrmjyyhtfddypkatofolcjacaldcsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168417.6609042-1329-113500124573353/AnsiballZ_file.py'
Jan 23 11:40:18 compute-0 sudo[209790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:18 compute-0 python3.9[209792]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:18 compute-0 sudo[209790]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:19 compute-0 sudo[209942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtpjrzsixhgbngfgrwxzasblbgtwbvxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168418.796483-1341-149284617220478/AnsiballZ_stat.py'
Jan 23 11:40:19 compute-0 sudo[209942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:19 compute-0 python3.9[209944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:19 compute-0 sudo[209942]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:19 compute-0 sudo[210021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbtdqoveuuchnzkhllzkgjnatkzoqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168418.796483-1341-149284617220478/AnsiballZ_file.py'
Jan 23 11:40:19 compute-0 sudo[210021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:19 compute-0 python3.9[210023]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.vhm7di75 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:19 compute-0 sudo[210021]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:20 compute-0 sudo[210173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjnpofdflwiokjytczagowzsvzxvtsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168420.062825-1353-241117371889693/AnsiballZ_stat.py'
Jan 23 11:40:20 compute-0 sudo[210173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:20 compute-0 python3.9[210175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:20 compute-0 sudo[210173]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:20 compute-0 sudo[210251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfelacnxtqdezzydjclznvusxvynvjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168420.062825-1353-241117371889693/AnsiballZ_file.py'
Jan 23 11:40:20 compute-0 sudo[210251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:20 compute-0 python3.9[210253]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:20 compute-0 sudo[210251]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:21 compute-0 sudo[210403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcyhwmqhfomcbxtflozjcjmbwemwtkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168421.2561033-1366-66100975344623/AnsiballZ_command.py'
Jan 23 11:40:21 compute-0 sudo[210403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:21 compute-0 python3.9[210405]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:40:21 compute-0 sudo[210403]: pam_unix(sudo:session): session closed for user root
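Before writing its own files, the role snapshots the live ruleset with nft -j list ruleset, i.e. in the JSON form the edpm_nftables modules can parse. The same snapshot is easy to inspect by hand, assuming only stock nft and Python:

    # Dump the kernel's current ruleset as JSON and pretty-print the head.
    nft -j list ruleset | python3 -m json.tool | head -n 20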
Jan 23 11:40:22 compute-0 sudo[210573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woyulbazfhwcjchzkfveiuxfugnagcfr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168422.0257444-1374-234166721399616/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 11:40:22 compute-0 sudo[210573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:22 compute-0 podman[210530]: 2026-01-23 11:40:22.511090063 +0000 UTC m=+0.068055395 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7)
Jan 23 11:40:22 compute-0 python3[210577]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 11:40:22 compute-0 sudo[210573]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:23 compute-0 sudo[210729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhpkicupvjixyqxhsuuiygogykixensf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168422.9289043-1382-249260081478015/AnsiballZ_stat.py'
Jan 23 11:40:23 compute-0 sudo[210729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:23 compute-0 python3.9[210731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:24 compute-0 sudo[210729]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:24 compute-0 sudo[210807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yreskwvnjhbnnddfsiitpzbhcmtrwqtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168422.9289043-1382-249260081478015/AnsiballZ_file.py'
Jan 23 11:40:24 compute-0 sudo[210807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:24 compute-0 python3.9[210809]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:24 compute-0 sudo[210807]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:25 compute-0 sudo[210959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clnsropdzjylptidqdfhxtbohkzobjha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168425.1694226-1394-75908190792272/AnsiballZ_stat.py'
Jan 23 11:40:25 compute-0 sudo[210959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:25 compute-0 python3.9[210961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:25 compute-0 sudo[210959]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:26 compute-0 sudo[211037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahttsgscquwjghmqsusydonisjidobjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168425.1694226-1394-75908190792272/AnsiballZ_file.py'
Jan 23 11:40:26 compute-0 sudo[211037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:26 compute-0 python3.9[211039]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:26 compute-0 sudo[211037]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:26 compute-0 sudo[211189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxftqwylfoivuqfwecjzlxexpphmxxos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168426.4301019-1406-212384494786509/AnsiballZ_stat.py'
Jan 23 11:40:26 compute-0 sudo[211189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:26 compute-0 python3.9[211191]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:26 compute-0 sudo[211189]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:27 compute-0 sudo[211267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtlhbicfxwtlnxaxjfrknvgocypleexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168426.4301019-1406-212384494786509/AnsiballZ_file.py'
Jan 23 11:40:27 compute-0 sudo[211267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:27 compute-0 python3.9[211269]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:27 compute-0 sudo[211267]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:28 compute-0 sudo[211419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kldlhpyucgkanblhvplxyoafnekaiwqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168427.7792645-1418-197920979114555/AnsiballZ_stat.py'
Jan 23 11:40:28 compute-0 sudo[211419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:28 compute-0 python3.9[211421]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:28 compute-0 sudo[211419]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:28 compute-0 sudo[211497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wamrutdoyupscognypgvwmzfxhzopmek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168427.7792645-1418-197920979114555/AnsiballZ_file.py'
Jan 23 11:40:28 compute-0 sudo[211497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:28 compute-0 python3.9[211499]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:28 compute-0 sudo[211497]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:40:29.079 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:40:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:40:29.081 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:40:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:40:29.081 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:40:29 compute-0 sudo[211649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfsxtyyproqxxfppkjlinmxgyeuvtgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168428.93809-1430-239295227466241/AnsiballZ_stat.py'
Jan 23 11:40:29 compute-0 sudo[211649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:29 compute-0 python3.9[211651]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:29 compute-0 sudo[211649]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:29 compute-0 sudo[211774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlsflhunattoxrsmtqshkbpyphviwfvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168428.93809-1430-239295227466241/AnsiballZ_copy.py'
Jan 23 11:40:29 compute-0 sudo[211774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:30 compute-0 python3.9[211776]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168428.93809-1430-239295227466241/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:30 compute-0 sudo[211774]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:30 compute-0 sudo[211926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrqeaxwyztlhutkufdkokvpzsrdfijr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168430.3454258-1445-63215790642285/AnsiballZ_file.py'
Jan 23 11:40:30 compute-0 sudo[211926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:30 compute-0 python3.9[211928]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:30 compute-0 sudo[211926]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:31 compute-0 sudo[212086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsdxztmrmrfdxoiusctesqgzuilugwjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168431.1050444-1453-132008210322584/AnsiballZ_command.py'
Jan 23 11:40:31 compute-0 sudo[212086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:31 compute-0 podman[212052]: 2026-01-23 11:40:31.393849332 +0000 UTC m=+0.058426996 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 23 11:40:31 compute-0 python3.9[212095]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:40:31 compute-0 sudo[212086]: pam_unix(sudo:session): session closed for user root
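The command above concatenates the five generated files in their load order and feeds them to nft -c, which parses and validates the ruleset without committing anything to the kernel. Reproducing the dry run by hand:

    # -c = check only; a non-zero exit means the assembled ruleset is invalid.
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f - && echo 'ruleset OK'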
Jan 23 11:40:32 compute-0 sudo[212253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojmrkvrllwluxegsoqqxhowrrpifcuxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168431.813916-1461-113877036487072/AnsiballZ_blockinfile.py'
Jan 23 11:40:32 compute-0 sudo[212253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:32 compute-0 python3.9[212255]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:32 compute-0 sudo[212253]: pam_unix(sudo:session): session closed for user root
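Given the module arguments above (marker '# {mark} ANSIBLE MANAGED BLOCK', marker_begin=BEGIN, marker_end=END, and the four include lines as the block), /etc/sysconfig/nftables.conf should end up containing a section like the following, which is what makes nftables.service restore the EDPM ruleset at boot. Reconstructed from the arguments, not copied from the host:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK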
Jan 23 11:40:32 compute-0 sudo[212405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcineetzjdvnzeoznnqxuojqatonovm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168432.738725-1470-107116708884044/AnsiballZ_command.py'
Jan 23 11:40:32 compute-0 sudo[212405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:33 compute-0 python3.9[212407]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:40:33 compute-0 sudo[212405]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:33 compute-0 sudo[212575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqnvbtttzcupulkhqabktpxwbgxigpmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168433.3969269-1478-206892356300546/AnsiballZ_stat.py'
Jan 23 11:40:33 compute-0 sudo[212575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:33 compute-0 podman[212532]: 2026-01-23 11:40:33.709415881 +0000 UTC m=+0.054090999 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 11:40:33 compute-0 python3.9[212584]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:40:33 compute-0 sudo[212575]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:34 compute-0 sudo[212736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcbsgkembtiehckaogtogqqgcftlzwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168434.146918-1486-82457697153029/AnsiballZ_command.py'
Jan 23 11:40:34 compute-0 sudo[212736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:34 compute-0 python3.9[212738]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:40:34 compute-0 sudo[212736]: pam_unix(sudo:session): session closed for user root
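The ordering of the two nft invocations is deliberate: edpm-chains.nft is applied on its own first because chain creation is idempotent, and then flushes, rules, and update-jumps are concatenated into a single nft -f - run so the flush-and-repopulate lands as one atomic nftables transaction, with no window in which a chain sits empty. By hand:

    # Create/refresh the chains (safe to repeat) ...
    nft -f /etc/nftables/edpm-chains.nft
    # ... then flush and reload the rules atomically in one transaction.
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -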
Jan 23 11:40:35 compute-0 sudo[212891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lftnworayvvvtqqzccyoqqqnhjzsbpwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168434.7687533-1494-248637893527648/AnsiballZ_file.py'
Jan 23 11:40:35 compute-0 sudo[212891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:35 compute-0 python3.9[212893]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:35 compute-0 sudo[212891]: pam_unix(sudo:session): session closed for user root
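edpm-rules.nft.changed is a change marker, not configuration: the copy task touches it whenever edpm-rules.nft is rewritten, a later stat gates the reload on its presence, and it is removed once the new rules are live. The same guard as a shell sketch, using the paths from this log:

    # Reload only when the copy step left its change marker behind.
    if [ -f /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-{flushes,rules,update-jumps}.nft | nft -f - &&
            rm -f /etc/nftables/edpm-rules.nft.changed
    fi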
Jan 23 11:40:35 compute-0 openstack_network_exporter[204160]: ERROR   11:40:35 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:40:35 compute-0 openstack_network_exporter[204160]: ERROR   11:40:35 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:40:35 compute-0 sshd-session[185539]: Connection closed by 192.168.122.30 port 32792
Jan 23 11:40:35 compute-0 sshd-session[185536]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:40:35 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 11:40:35 compute-0 systemd[1]: session-26.scope: Consumed 1min 44.765s CPU time.
Jan 23 11:40:35 compute-0 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Jan 23 11:40:35 compute-0 systemd-logind[798]: Removed session 26.
Jan 23 11:40:35 compute-0 podman[212923]: 2026-01-23 11:40:35.683307909 +0000 UTC m=+0.046286856 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 11:40:36 compute-0 podman[201022]: time="2026-01-23T11:40:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:40:36 compute-0 podman[201022]: @ - - [23/Jan/2026:11:40:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 23 11:40:36 compute-0 podman[201022]: @ - - [23/Jan/2026:11:40:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2986 "" "Go-http-client/1.1"
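These GETs are a client scraping the libpod REST API over the rootful podman socket; judging by the CONTAINER_HOST=unix:///run/podman/podman.sock setting logged for podman_exporter, that exporter is the most likely caller. The request paths are verbatim from the access log, and the same endpoint can be queried manually, assuming curl is present (the 'd' hostname is a dummy curl requires for unix-socket URLs):

    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true' | python3 -m json.tool | head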
Jan 23 11:40:37 compute-0 podman[212947]: 2026-01-23 11:40:37.746950375 +0000 UTC m=+0.079832755 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 11:40:41 compute-0 sshd-session[212973]: Accepted publickey for zuul from 192.168.122.30 port 43994 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:40:41 compute-0 systemd-logind[798]: New session 27 of user zuul.
Jan 23 11:40:41 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 23 11:40:41 compute-0 sshd-session[212973]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:40:42 compute-0 sudo[213126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruwsdxvbjqwgumwzobgqruhlrukxexdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168442.1009352-19-48038188118810/AnsiballZ_systemd_service.py'
Jan 23 11:40:42 compute-0 sudo[213126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:43 compute-0 python3.9[213128]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:40:43 compute-0 systemd[1]: Reloading.
Jan 23 11:40:43 compute-0 systemd-sysv-generator[213161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Jan 23 11:40:43 compute-0 systemd-rc-local-generator[213158]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:40:43 compute-0 sudo[213126]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:44 compute-0 python3.9[213314]: ansible-ansible.builtin.service_facts Invoked
Jan 23 11:40:44 compute-0 network[213331]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 11:40:44 compute-0 network[213332]: 'network-scripts' will be removed from the distribution in the near future.
Jan 23 11:40:44 compute-0 network[213333]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 11:40:47 compute-0 podman[213404]: 2026-01-23 11:40:47.419410345 +0000 UTC m=+0.089716553 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:40:48 compute-0 sudo[213626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmzhaavyxpjhuiczqvghywjuxgvvcxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168448.5532887-42-238333755715773/AnsiballZ_systemd_service.py'
Jan 23 11:40:48 compute-0 sudo[213626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:49 compute-0 python3.9[213628]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:40:49 compute-0 sudo[213626]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:49 compute-0 sudo[213779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflxxvwvbtnhxoawilfpfaiegldtfojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168449.4477239-52-150293404376908/AnsiballZ_file.py'
Jan 23 11:40:49 compute-0 sudo[213779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:49 compute-0 python3.9[213781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:50 compute-0 sudo[213779]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:50 compute-0 sudo[213931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phddeegsarbkyeokcpwyyplhflpeiplg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168450.1945324-60-82953680519557/AnsiballZ_file.py'
Jan 23 11:40:50 compute-0 sudo[213931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:50 compute-0 python3.9[213933]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:40:50 compute-0 sudo[213931]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:51 compute-0 sudo[214083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ackcrlhtrzuvxgphhwzphsafxdxlkvnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168450.8632731-69-56849064788866/AnsiballZ_command.py'
Jan 23 11:40:51 compute-0 sudo[214083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:51 compute-0 python3.9[214085]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:40:51 compute-0 sudo[214083]: pam_unix(sudo:session): session closed for user root
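The guard in the command above is about masking safely: disable --now stops the service and removes its activation links, while mask goes further and symlinks the unit name to /dev/null so nothing can start it again. Because systemctl mask refuses to overwrite a real file at that path, the test -f skips masking when a local unit file already exists under /etc/systemd/system. What a successful mask leaves behind:

    # After masking (assumed path; only present once the mask has been applied):
    ls -l /etc/systemd/system/certmonger.service    # -> /dev/null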
Jan 23 11:40:52 compute-0 python3.9[214237]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:40:52 compute-0 podman[214337]: 2026-01-23 11:40:52.707329085 +0000 UTC m=+0.047773337 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git)
Jan 23 11:40:52 compute-0 sudo[214409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxcbovucsrvxmayxmcomzyzqnfwtrhhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168452.512994-87-20194733429662/AnsiballZ_systemd_service.py'
Jan 23 11:40:52 compute-0 sudo[214409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:53 compute-0 python3.9[214411]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:40:53 compute-0 systemd[1]: Reloading.
Jan 23 11:40:53 compute-0 systemd-rc-local-generator[214430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:40:53 compute-0 systemd-sysv-generator[214437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Jan 23 11:40:53 compute-0 sudo[214409]: pam_unix(sudo:session): session closed for user root
Jan 23 11:40:53 compute-0 sudo[214596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knwrrmmohkgehbnaaugaljvjdnedzfom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168453.5745788-95-257643234083734/AnsiballZ_command.py'
Jan 23 11:40:53 compute-0 sudo[214596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:54 compute-0 python3.9[214598]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:40:54 compute-0 sudo[214596]: pam_unix(sudo:session): session closed for user root
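Taken together, the preceding tasks retire the old TripleO IPMI agent unit in the canonical order: stop and disable it, remove both possible unit-file locations, reload the manager so it forgets the unit, then clear any lingering failed state. The equivalent manual sequence:

    systemctl disable --now tripleo_ceilometer_agent_ipmi.service
    rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service \
          /etc/systemd/system/tripleo_ceilometer_agent_ipmi.service
    systemctl daemon-reload
    systemctl reset-failed tripleo_ceilometer_agent_ipmi.service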
Jan 23 11:40:54 compute-0 sudo[214749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jscyxdsahiqggkwvaaprirlkvcaatgww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168454.3824542-104-126068475271346/AnsiballZ_file.py'
Jan 23 11:40:54 compute-0 sudo[214749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:54 compute-0 python3.9[214751]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:40:54 compute-0 sudo[214749]: pam_unix(sudo:session): session closed for user root
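setype=container_file_t matters here because the directory is later bind-mounted into containers: under SELinux enforcement, container processes may only write to files labeled container_file_t (the :z/:Z volume suffixes seen elsewhere in this log relabel the same way at mount time). A rough non-Ansible equivalent, assuming the same owner and mode; note that chcon relabels immediately but, unlike a semanage fcontext rule, is not persisted in policy:

    install -d -o zuul -g zuul -m 0750 /var/lib/openstack/telemetry-power-monitoring
    chcon -R -t container_file_t /var/lib/openstack/telemetry-power-monitoring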
Jan 23 11:40:55 compute-0 python3.9[214901]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:40:56 compute-0 python3.9[215053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:57 compute-0 python3.9[215174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168455.9699514-120-26665716442633/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:40:57 compute-0 python3.9[215324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:40:58 compute-0 python3.9[215445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168457.3386843-135-27023521065001/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:40:59 compute-0 sudo[215595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxcgmlyudazkbgjouspycbwvfruaydjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168458.6705034-153-184975624022556/AnsiballZ_getent.py'
Jan 23 11:40:59 compute-0 sudo[215595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:40:59 compute-0 python3.9[215597]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 23 11:40:59 compute-0 sudo[215595]: pam_unix(sudo:session): session closed for user root
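The getent lookup resolves the ceilometer account from the passwd database, presumably so the configuration files written next can be owned by the UID the container user maps to. The same check from a shell:

    getent passwd ceilometer    # prints 'ceilometer:x:UID:GID:...'; exit code 2 if absent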
Jan 23 11:40:59 compute-0 podman[201022]: time="2026-01-23T11:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:40:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 23 11:40:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Jan 23 11:41:00 compute-0 python3.9[215749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:01 compute-0 python3.9[215870]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168460.2768607-181-132267355925060/.source.conf _original_basename=ceilometer.conf follow=False checksum=f817847bb0474d7c55a7ad9afdea5f1400a30720 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.231 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.274 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.274 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.274 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.275 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.398 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.399 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5872MB free_disk=72.48031234741211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.399 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.399 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:41:01 compute-0 openstack_network_exporter[204160]: ERROR   11:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:41:01 compute-0 openstack_network_exporter[204160]: ERROR   11:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.448 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.449 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.449 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.449 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.452 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa65b80>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.457 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:41:01.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.481 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.481 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.510 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.529 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.532 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:41:01 compute-0 nova_compute[185173]: 2026-01-23 11:41:01.532 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:41:01 compute-0 podman[215996]: 2026-01-23 11:41:01.655351452 +0000 UTC m=+0.073681421 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 23 11:41:01 compute-0 python3.9[216028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:02 compute-0 python3.9[216163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168461.3238504-181-232184169246998/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:02 compute-0 nova_compute[185173]: 2026-01-23 11:41:02.534 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:02 compute-0 nova_compute[185173]: 2026-01-23 11:41:02.535 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:41:02 compute-0 nova_compute[185173]: 2026-01-23 11:41:02.535 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:41:02 compute-0 nova_compute[185173]: 2026-01-23 11:41:02.551 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:41:02 compute-0 nova_compute[185173]: 2026-01-23 11:41:02.551 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:02 compute-0 nova_compute[185173]: 2026-01-23 11:41:02.552 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:02 compute-0 python3.9[216313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:03 compute-0 nova_compute[185173]: 2026-01-23 11:41:03.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:03 compute-0 nova_compute[185173]: 2026-01-23 11:41:03.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:03 compute-0 python3.9[216434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168462.4053097-181-59442865918204/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:03 compute-0 python3.9[216584]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:41:04 compute-0 nova_compute[185173]: 2026-01-23 11:41:04.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:04 compute-0 podman[216710]: 2026-01-23 11:41:04.25535245 +0000 UTC m=+0.051213702 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 11:41:04 compute-0 python3.9[216753]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:41:05 compute-0 python3.9[216912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:05 compute-0 nova_compute[185173]: 2026-01-23 11:41:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:41:05 compute-0 nova_compute[185173]: 2026-01-23 11:41:05.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:41:05 compute-0 python3.9[217033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168464.6217694-240-10459843470109/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:06 compute-0 sudo[217194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abvkviywwrtfebucorpmrbdlcyerrioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168465.7823892-255-277124391466226/AnsiballZ_file.py'
Jan 23 11:41:06 compute-0 sudo[217194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:06 compute-0 podman[217157]: 2026-01-23 11:41:06.057094521 +0000 UTC m=+0.054255026 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 11:41:06 compute-0 python3.9[217201]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:06 compute-0 sudo[217194]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:06 compute-0 sudo[217353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unytnyvainrinoblzwquivpwjnxyyjqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168466.4055562-263-131798046495596/AnsiballZ_file.py'
Jan 23 11:41:06 compute-0 sudo[217353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:06 compute-0 python3.9[217355]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:06 compute-0 sudo[217353]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:07 compute-0 sudo[217505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlddovryvfgubgmuloydlvmkoejzyhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168466.9951453-271-143183813007072/AnsiballZ_file.py'
Jan 23 11:41:07 compute-0 sudo[217505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:07 compute-0 python3.9[217507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:41:07 compute-0 sudo[217505]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:07 compute-0 sudo[217667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwhimgqatdiorxneteognsiqssdqpgak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168467.6323211-279-68406900100855/AnsiballZ_stat.py'
Jan 23 11:41:07 compute-0 sudo[217667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:07 compute-0 podman[217631]: 2026-01-23 11:41:07.973170956 +0000 UTC m=+0.083956432 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 23 11:41:08 compute-0 python3.9[217673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:08 compute-0 sudo[217667]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:08 compute-0 sudo[217804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrclrdvuxofgsedlzojmjyzcwbgsxbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168467.6323211-279-68406900100855/AnsiballZ_copy.py'
Jan 23 11:41:08 compute-0 sudo[217804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:08 compute-0 python3.9[217806]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168467.6323211-279-68406900100855/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:41:08 compute-0 sudo[217804]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:08 compute-0 sudo[217880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aahofsardzbqcgzznrodrygnbdbudmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168467.6323211-279-68406900100855/AnsiballZ_stat.py'
Jan 23 11:41:08 compute-0 sudo[217880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:09 compute-0 python3.9[217882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:09 compute-0 sudo[217880]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:09 compute-0 sudo[218003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmotjslklggvgcdfdkebxnmmonvovff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168467.6323211-279-68406900100855/AnsiballZ_copy.py'
Jan 23 11:41:09 compute-0 sudo[218003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:09 compute-0 python3.9[218005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168467.6323211-279-68406900100855/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:41:09 compute-0 sudo[218003]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:10 compute-0 sudo[218155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfoxpihgqmsnntbxsxghsyvljhqtvhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168469.7819345-279-35049285017190/AnsiballZ_stat.py'
Jan 23 11:41:10 compute-0 sudo[218155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:10 compute-0 python3.9[218157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:10 compute-0 sudo[218155]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:10 compute-0 sudo[218278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxdnewqdnslymcdpvywzlcqcnbjfhfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168469.7819345-279-35049285017190/AnsiballZ_copy.py'
Jan 23 11:41:10 compute-0 sudo[218278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:10 compute-0 python3.9[218280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769168469.7819345-279-35049285017190/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:41:10 compute-0 sudo[218278]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:11 compute-0 sudo[218430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojqjxikhvvxyharlqmdqtgwszgyitsml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168471.2109077-321-254667945231483/AnsiballZ_file.py'
Jan 23 11:41:11 compute-0 sudo[218430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:11 compute-0 python3.9[218432]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:11 compute-0 sudo[218430]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:12 compute-0 sudo[218582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvemecurjqvtotydcixyfzownbvagexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168471.9244988-329-268577632063119/AnsiballZ_file.py'
Jan 23 11:41:12 compute-0 sudo[218582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:12 compute-0 python3.9[218584]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:41:12 compute-0 sudo[218582]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:12 compute-0 sudo[218734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atedtknehesujjicklsjoibedkutuweq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168472.6749194-337-181982085757046/AnsiballZ_stat.py'
Jan 23 11:41:12 compute-0 sudo[218734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:13 compute-0 python3.9[218736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:13 compute-0 sudo[218734]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:13 compute-0 sudo[218857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sltrwhjtwjmfdakexemraurheersznpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168472.6749194-337-181982085757046/AnsiballZ_copy.py'
Jan 23 11:41:13 compute-0 sudo[218857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:13 compute-0 python3.9[218859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168472.6749194-337-181982085757046/.source.json _original_basename=.u_9yhvd7 follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:13 compute-0 sudo[218857]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:14 compute-0 python3.9[219009]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:16 compute-0 sudo[219430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oytdftxgbttubkywirrnmwaitxxwmegz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168475.7932608-377-227953764457496/AnsiballZ_container_config_data.py'
Jan 23 11:41:16 compute-0 sudo[219430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:16 compute-0 python3.9[219432]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 23 11:41:16 compute-0 sudo[219430]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:17 compute-0 sudo[219582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhijkypdxjmkfgcbvpfsiekpwrtwylhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168476.80412-388-103457638082361/AnsiballZ_container_config_hash.py'
Jan 23 11:41:17 compute-0 sudo[219582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:17 compute-0 python3.9[219584]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:41:17 compute-0 sudo[219582]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:17 compute-0 podman[219609]: 2026-01-23 11:41:17.709642379 +0000 UTC m=+0.045464932 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:41:18 compute-0 sudo[219758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olhagozxxdhfpzleltvxfggymtawxgma ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168477.9173717-398-20949726462909/AnsiballZ_edpm_container_manage.py'
Jan 23 11:41:18 compute-0 sudo[219758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:18 compute-0 python3[219760]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:41:18 compute-0 podman[219798]: 2026-01-23 11:41:18.835341041 +0000 UTC m=+0.050860573 container create adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 11:41:18 compute-0 podman[219798]: 2026-01-23 11:41:18.806959538 +0000 UTC m=+0.022479170 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 23 11:41:18 compute-0 python3[219760]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 23 11:41:18 compute-0 sudo[219758]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:19 compute-0 sudo[219987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwchzettjmhqmzyslwfshkfjntpvxriw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168479.1578057-406-157668331865452/AnsiballZ_stat.py'
Jan 23 11:41:19 compute-0 sudo[219987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:19 compute-0 python3.9[219989]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:41:19 compute-0 sudo[219987]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:20 compute-0 sudo[220141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arzhltcgwjvhycnrbfmnxmkwmiadfyxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168479.8567173-415-4305897687673/AnsiballZ_file.py'
Jan 23 11:41:20 compute-0 sudo[220141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:20 compute-0 python3.9[220143]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:20 compute-0 sudo[220141]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:20 compute-0 sudo[220217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngwrqelgqinfjtiaecsehwhhiqqogab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168479.8567173-415-4305897687673/AnsiballZ_stat.py'
Jan 23 11:41:20 compute-0 sudo[220217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:20 compute-0 python3.9[220219]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:41:20 compute-0 sudo[220217]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:21 compute-0 sudo[220368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrrkujzlflynbgavjfwykdkwdxfdncla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168480.739482-415-13450145594519/AnsiballZ_copy.py'
Jan 23 11:41:21 compute-0 sudo[220368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:21 compute-0 python3.9[220370]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168480.739482-415-13450145594519/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:21 compute-0 sudo[220368]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:21 compute-0 sudo[220444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwdqrsbegcrzcvspjtkhxfzpxusaxma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168480.739482-415-13450145594519/AnsiballZ_systemd.py'
Jan 23 11:41:21 compute-0 sudo[220444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:22 compute-0 python3.9[220446]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:41:22 compute-0 systemd[1]: Reloading.
Jan 23 11:41:22 compute-0 systemd-rc-local-generator[220471]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:41:22 compute-0 systemd-sysv-generator[220476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:41:22 compute-0 sudo[220444]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:22 compute-0 sudo[220565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aucbbxidczkbnfaziyrqepbtfaelzmxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168480.739482-415-13450145594519/AnsiballZ_systemd.py'
Jan 23 11:41:22 compute-0 sudo[220565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:22 compute-0 podman[220528]: 2026-01-23 11:41:22.887081611 +0000 UTC m=+0.067488109 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 23 11:41:23 compute-0 python3.9[220574]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:41:23 compute-0 systemd[1]: Reloading.
Jan 23 11:41:23 compute-0 systemd-rc-local-generator[220605]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:41:23 compute-0 systemd-sysv-generator[220609]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:41:23 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 23 11:41:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.
Jan 23 11:41:23 compute-0 podman[220618]: 2026-01-23 11:41:23.644625062 +0000 UTC m=+0.144612083 container init adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + sudo -E kolla_set_configs
Jan 23 11:41:23 compute-0 sudo[220639]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 23 11:41:23 compute-0 sudo[220639]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 11:41:23 compute-0 sudo[220639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:41:23 compute-0 podman[220618]: 2026-01-23 11:41:23.672819911 +0000 UTC m=+0.172806922 container start adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 23 11:41:23 compute-0 podman[220618]: ceilometer_agent_ipmi
Jan 23 11:41:23 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 23 11:41:23 compute-0 sudo[220565]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Validating config file
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Copying service configuration files
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: INFO:__main__:Writing out command to execute
Jan 23 11:41:23 compute-0 sudo[220639]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: ++ cat /run_command
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + ARGS=
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + sudo kolla_copy_cacerts
Jan 23 11:41:23 compute-0 sudo[220663]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 23 11:41:23 compute-0 sudo[220663]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 11:41:23 compute-0 sudo[220663]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:41:23 compute-0 sudo[220663]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + [[ ! -n '' ]]
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + . kolla_extend_start
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + umask 0022
Jan 23 11:41:23 compute-0 ceilometer_agent_ipmi[220633]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 23 11:41:23 compute-0 podman[220640]: 2026-01-23 11:41:23.768037206 +0000 UTC m=+0.082247410 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:41:23 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-689e231a39b0488d.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:41:23 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-689e231a39b0488d.service: Failed with result 'exit-code'.
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.576 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.577 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.578 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.579 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.580 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.581 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.582 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.583 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.584 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.585 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.586 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.587 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.588 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.589 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.606 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.607 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.608 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 23 11:41:24 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:24.697 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpd4avazcg/privsep.sock']
Jan 23 11:41:24 compute-0 sudo[220820]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpd4avazcg/privsep.sock
Jan 23 11:41:24 compute-0 sudo[220820]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 11:41:24 compute-0 sudo[220820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:41:24 compute-0 python3.9[220815]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:41:25 compute-0 sudo[220820]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.311 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.312 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpd4avazcg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.204 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.207 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.209 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.209 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.451 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.451 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.452 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.452 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.453 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.454 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.456 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.456 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.456 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.456 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.456 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.456 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.457 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.458 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.459 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.460 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.461 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.462 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.463 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.464 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.465 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.466 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.467 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.468 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.469 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.470 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.471 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.472 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.473 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.474 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.474 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.474 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
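Annotation: the masked "****" values and the closing row of asterisks above are produced by oslo.config's log_opt_values(), which dumps every registered option at DEBUG level and replaces any option registered with secret=True (transport_url, publisher.telemetry_secret, the rgw keys, vmware.host_password) with asterisks. A minimal sketch of that mechanism, using illustrative option names rather than ceilometer's full set:

    # Sketch of the oslo.config dump seen above. Option names here are
    # illustrative; log_opt_values() and the secret=True masking are
    # real oslo.config behaviour.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    opts = [
        cfg.StrOpt('transport_url', secret=True),   # logged as ****
        cfg.BoolOpt('use_syslog', default=False),   # logged verbatim
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts)
    CONF(args=[])                               # parse (empty) CLI/config files
    CONF.log_opt_values(LOG, logging.DEBUG)     # emits the "option = value" lines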
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.474 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 23 11:41:25 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:25.477 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
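Annotation: the dict logged above is the parsed contents of the agent's polling definition file (polling.yaml, per polling.cfg_file in the dump above). A sketch of an equivalent file and its load with plain PyYAML; the exact loader in ceilometer.agent.load_config is simplified here:

    # Sketch: the polling definition above as it would look on disk,
    # round-tripped through yaml.safe_load.
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - hardware.*
    """

    config = yaml.safe_load(POLLING_YAML)
    assert config == {'sources': [{'name': 'pollsters',
                                   'interval': 120,
                                   'meters': ['hardware.*']}]}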
Jan 23 11:41:25 compute-0 sudo[220978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giclwpmiqcviyqnnnsmuhdhbrejtikti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168485.3551328-460-122561298053816/AnsiballZ_stat.py'
Jan 23 11:41:25 compute-0 sudo[220978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:25 compute-0 python3.9[220980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:25 compute-0 sudo[220978]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:26 compute-0 sudo[221103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acksomvpamrtiicpzgrzzrqvtakwacun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168485.3551328-460-122561298053816/AnsiballZ_copy.py'
Jan 23 11:41:26 compute-0 sudo[221103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:26 compute-0 python3.9[221105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168485.3551328-460-122561298053816/.source.yaml _original_basename=.1rnt37a2 follow=False checksum=46c2b923ad2d55ec56f021948ab7d2a33a66f3a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:26 compute-0 sudo[221103]: pam_unix(sudo:session): session closed for user root
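Annotation: the stat/copy pair above is ansible's idempotent file deployment: the stat task hashes the destination, and the copy task (which carries checksum=46c2b923...) only rewrites the file when the sha1 differs. A rough standard-library rendering of that check, with illustrative paths:

    # Sketch of the checksum comparison behind ansible's stat + copy pair:
    # only rewrite dest when its sha1 differs from the source's.
    import hashlib, os, shutil

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    def idempotent_copy(src, dest):
        if os.path.exists(dest) and sha1_of(dest) == sha1_of(src):
            return False             # unchanged, report ok
        shutil.copyfile(src, dest)   # content differs, report changed
        return True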
Jan 23 11:41:27 compute-0 sudo[221255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dswkaenxgscxjtypupmkhpuebusqbfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168486.8859112-477-39803184262934/AnsiballZ_file.py'
Jan 23 11:41:27 compute-0 sudo[221255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:27 compute-0 python3.9[221257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:27 compute-0 sudo[221255]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:27 compute-0 sudo[221407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkarojcjbkaqcfurtpshbvmgwwyebrun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168487.5664947-485-44748288973610/AnsiballZ_file.py'
Jan 23 11:41:27 compute-0 sudo[221407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:28 compute-0 python3.9[221409]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 11:41:28 compute-0 sudo[221407]: pam_unix(sudo:session): session closed for user root
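Annotation: the file task above recursively applies setype=container_file_t so containers can read the bind-mounted config directory. Outside ansible the same relabel is a single chcon call; a sketch, with the path taken from the log:

    # Sketch: the SELinux relabel performed by the file task above,
    # expressed as a plain recursive chcon invocation.
    import subprocess

    subprocess.run(['chcon', '-R', '-t', 'container_file_t',
                    '/var/lib/kolla/config_files'], check=True)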
Jan 23 11:41:28 compute-0 python3.9[221559]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:41:29.080 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:41:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:41:29.082 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:41:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:41:29.082 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:41:29 compute-0 podman[201022]: time="2026-01-23T11:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:41:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24317 "" "Go-http-client/1.1"
Jan 23 11:41:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3425 "" "Go-http-client/1.1"
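Annotation: the two GET requests above are the libpod REST API served over podman's unix socket (the podman[201022] service process), queried here for container lists and stats. The same call can be issued with only the Python standard library; the socket path and API version are taken from the log, and the response field names follow the libpod ListContainers schema:

    # Sketch: replaying the containers/json call above over the podman socket.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr['Names'], ctr['State'])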
Jan 23 11:41:30 compute-0 sudo[221980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufejedhcytqawgkynkilchoexcmcrev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168490.3194351-519-227493687058176/AnsiballZ_container_config_data.py'
Jan 23 11:41:30 compute-0 sudo[221980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:30 compute-0 python3.9[221982]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 23 11:41:30 compute-0 sudo[221980]: pam_unix(sudo:session): session closed for user root
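Annotation: container_config_data gathers the per-container startup configs matching config_pattern=*.json under the given config_path. A rough equivalent of that collection step (the module's internals are simplified here):

    # Sketch: collect every *.json startup config for the kepler container,
    # as the ansible-container_config_data invocation above does.
    import glob, json

    configs = {}
    for path in glob.glob('/var/lib/edpm-config/container-startup-config/kepler/*.json'):
        with open(path) as f:
            configs[path] = json.load(f)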
Jan 23 11:41:31 compute-0 openstack_network_exporter[204160]: ERROR   11:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:41:31 compute-0 openstack_network_exporter[204160]: ERROR   11:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
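Annotation: these exporter errors are expected on this host: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show are ovs-appctl commands that only apply to a userspace (netdev/DPDK) datapath, and with only the kernel datapath present ovs-vswitchd answers "please specify an existing datapath". A sketch that probes for a userspace datapath first; dpif/show is a standard ovs-appctl command, and the parsing here is deliberately rough:

    # Sketch: skip the PMD queries that failed above unless a userspace
    # (netdev) datapath actually exists.
    import subprocess

    out = subprocess.run(['ovs-appctl', 'dpif/show'],
                         capture_output=True, text=True, check=True).stdout
    if 'netdev' in out:
        subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'], check=True)
    else:
        print('no userspace datapath; skipping PMD stats')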
Jan 23 11:41:31 compute-0 sudo[222132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxwdndangbpvsjnpifmlnfqafffhbfjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168491.283925-530-5658869423537/AnsiballZ_container_config_hash.py'
Jan 23 11:41:31 compute-0 sudo[222132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:31 compute-0 podman[222135]: 2026-01-23 11:41:31.772316306 +0000 UTC m=+0.073606388 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 23 11:41:31 compute-0 python3.9[222134]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 11:41:31 compute-0 sudo[222132]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:32 compute-0 sudo[222305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkmqdsrqvlbdsdvnqcbbllxwklohbzr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168492.2828393-540-119718348653449/AnsiballZ_edpm_container_manage.py'
Jan 23 11:41:32 compute-0 sudo[222305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:32 compute-0 python3[222307]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 11:41:33 compute-0 podman[222345]: 2026-01-23 11:41:33.05738787 +0000 UTC m=+0.043706958 container create 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 23 11:41:33 compute-0 podman[222345]: 2026-01-23 11:41:33.033763953 +0000 UTC m=+0.020083061 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 23 11:41:33 compute-0 python3[222307]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 23 11:41:33 compute-0 sudo[222305]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:33 compute-0 sudo[222533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvjavaostepuogecbmgveltxbdrrszgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168493.3410623-548-127331039787036/AnsiballZ_stat.py'
Jan 23 11:41:33 compute-0 sudo[222533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:33 compute-0 python3.9[222535]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:41:33 compute-0 sudo[222533]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:34 compute-0 podman[222661]: 2026-01-23 11:41:34.481075739 +0000 UTC m=+0.072055991 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
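Annotation: the health_status=healthy records above come from each container's configured healthcheck, which a systemd timer (for kepler, the edpm_kepler_healthcheck.timer checked below) fires periodically via podman healthcheck run. A sketch of running one manually and reading back the stored result; note the inspect format path can vary between podman versions:

    # Sketch: trigger a healthcheck like the timer does, then read the result.
    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run', 'podman_exporter']).returncode
    print('healthy' if rc == 0 else 'unhealthy')

    # Field path assumed for podman 4.x; older releases used .State.Healthcheck.
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}', 'podman_exporter'],
        capture_output=True, text=True).stdout.strip()
    print(status)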
Jan 23 11:41:34 compute-0 sudo[222703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbjtnhyudxckcvqjpmjidynnjgauufxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168494.1189384-557-203127204247159/AnsiballZ_file.py'
Jan 23 11:41:34 compute-0 sudo[222703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:34 compute-0 python3.9[222712]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:34 compute-0 sudo[222703]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:34 compute-0 sudo[222786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxewzsxwtnywthqhwyvmsofniajrtpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168494.1189384-557-203127204247159/AnsiballZ_stat.py'
Jan 23 11:41:34 compute-0 sudo[222786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:35 compute-0 python3.9[222788]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:41:35 compute-0 sudo[222786]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:35 compute-0 sudo[222937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzwxwxywokoerlisffqnbpfwwstqqhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168495.2267594-557-243202820197741/AnsiballZ_copy.py'
Jan 23 11:41:35 compute-0 sudo[222937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:35 compute-0 python3.9[222939]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769168495.2267594-557-243202820197741/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:35 compute-0 sudo[222937]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:36 compute-0 sudo[223028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvnnwyqrvsoquopyimnfljjfsvstqrvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168495.2267594-557-243202820197741/AnsiballZ_systemd.py'
Jan 23 11:41:36 compute-0 sudo[223028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:36 compute-0 podman[222987]: 2026-01-23 11:41:36.238306034 +0000 UTC m=+0.045790769 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 11:41:36 compute-0 python3.9[223034]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 11:41:36 compute-0 systemd[1]: Reloading.
Jan 23 11:41:36 compute-0 systemd-rc-local-generator[223058]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:41:36 compute-0 systemd-sysv-generator[223061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:41:36 compute-0 sudo[223028]: pam_unix(sudo:session): session closed for user root
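Annotation: with the unit file installed and the daemon reloaded, the next task restarts and enables edpm_kepler.service, which manages the kepler container created above. What the ansible systemd module effectively runs here, expressed as plain systemctl usage rather than ansible internals:

    # Sketch: daemon-reload, enable, and restart, mirroring the two
    # ansible-systemd invocations in this transaction.
    import subprocess

    subprocess.run(['systemctl', 'daemon-reload'], check=True)
    subprocess.run(['systemctl', 'enable', 'edpm_kepler.service'], check=True)
    subprocess.run(['systemctl', 'restart', 'edpm_kepler.service'], check=True)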
Jan 23 11:41:37 compute-0 sudo[223143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldoqpvjztotwnbpqnkpzjgmkbsvlfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168495.2267594-557-243202820197741/AnsiballZ_systemd.py'
Jan 23 11:41:37 compute-0 sudo[223143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:37 compute-0 python3.9[223145]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 11:41:37 compute-0 systemd[1]: Reloading.
Jan 23 11:41:37 compute-0 systemd-rc-local-generator[223174]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 11:41:37 compute-0 systemd-sysv-generator[223177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 11:41:37 compute-0 systemd[1]: Starting kepler container...
Jan 23 11:41:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:41:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.
Jan 23 11:41:37 compute-0 podman[223184]: 2026-01-23 11:41:37.901654526 +0000 UTC m=+0.115936672 container init 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, version=9.4, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:41:37 compute-0 kepler[223199]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.926307       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.926442       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.926697       1 config.go:295] kernel version: 5.14
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.927424       1 power.go:78] Unable to obtain power, use estimate method
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.927449       1 redfish.go:169] failed to get redfish credential file path
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.927882       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.927903       1 power.go:79] using none to obtain power
Jan 23 11:41:37 compute-0 kepler[223199]: E0123 11:41:37.927918       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 23 11:41:37 compute-0 kepler[223199]: E0123 11:41:37.928048       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 23 11:41:37 compute-0 kepler[223199]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 23 11:41:37 compute-0 kepler[223199]: I0123 11:41:37.929720       1 exporter.go:84] Number of CPUs: 8
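
Kepler's startup above falls back to "using none to obtain power" because the usual power sources are absent on this KVM guest: no RAPL counters, no redfish credentials, no ACPI power meter, and no GPU. A small probe of the paths those messages imply; the RAPL/hwmon locations listed are the standard sysfs trees Kepler reads on bare metal, assumed here for illustration (note cpu0/online is often absent even on hardware, since cpu0 is not hotpluggable, which is exactly the WARNING logged twice above):

    import os

    probes = [
        "/sys/devices/system/cpu/cpu0/online",   # per-CPU hotplug flag
        "/sys/class/powercap/intel-rapl",        # RAPL energy counters
        "/sys/class/hwmon",                      # ACPI/hwmon power meters
    ]
    for p in probes:
        print(p, "->", "present" if os.path.exists(p) else "missing")
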
Jan 23 11:41:37 compute-0 podman[223184]: 2026-01-23 11:41:37.934901268 +0000 UTC m=+0.149183404 container start 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=kepler, distribution-scope=public, vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 23 11:41:37 compute-0 podman[223184]: kepler
Jan 23 11:41:37 compute-0 systemd[1]: Started kepler container.
Jan 23 11:41:37 compute-0 sudo[223143]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:38 compute-0 podman[223209]: 2026-01-23 11:41:38.010998926 +0000 UTC m=+0.067437207 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:41:38 compute-0 systemd[1]: 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a-5f458c8f7c81b913.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:41:38 compute-0 systemd[1]: 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a-5f458c8f7c81b913.service: Failed with result 'exit-code'.
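
The transient unit that failed above is the podman healthcheck runner started two entries earlier; while the container reports health_status=starting, its check script exits non-zero and systemd records status=1/FAILURE. The same check can be re-run by hand with the command the log itself shows:

    import subprocess

    # Re-run the container health check exactly as the transient systemd
    # unit does; a non-zero return code is what systemd reports as
    # status=1/FAILURE while the container is still 'starting'.
    r = subprocess.run(["podman", "healthcheck", "run", "kepler"])
    print("healthcheck exit code:", r.returncode)
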
Jan 23 11:41:38 compute-0 podman[223246]: 2026-01-23 11:41:38.103146817 +0000 UTC m=+0.071328513 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.416542       1 watcher.go:83] Using in cluster k8s config
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.417226       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 23 11:41:38 compute-0 kepler[223199]: E0123 11:41:38.418033       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.421903       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.421931       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.425456       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.425481       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.432484       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.432519       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.432532       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438814       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438850       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438855       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438860       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438867       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438880       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438967       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.438993       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.439033       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.439051       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.439174       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 23 11:41:38 compute-0 kepler[223199]: I0123 11:41:38.439501       1 exporter.go:208] Started Kepler in 513.418888ms
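
With the Process/Container/VM/Node collectors registered and the exporter listening on 0.0.0.0:8888, the metrics are scrapeable over HTTP. A minimal sketch of a manual scrape; the /metrics path is the Prometheus exposition convention and is assumed here rather than confirmed by the log:

    import urllib.request

    # Fetch the Prometheus text exposition and show the first few
    # kepler_* series registered above.
    with urllib.request.urlopen("http://localhost:8888/metrics", timeout=5) as resp:
        body = resp.read().decode()
    kepler_lines = [l for l in body.splitlines() if l.startswith("kepler_")]
    print("\n".join(kepler_lines[:5]))
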
Jan 23 11:41:38 compute-0 python3.9[223415]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 11:41:40 compute-0 sudo[223565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecqfxrsqqzcwuvuwulidwjfpckmvxaxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168499.667945-602-238835117560917/AnsiballZ_stat.py'
Jan 23 11:41:40 compute-0 sudo[223565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:40 compute-0 python3.9[223567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:41:40 compute-0 sudo[223565]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:40 compute-0 sudo[223690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyfetwzxncgniomhylelppyumjafbfnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168499.667945-602-238835117560917/AnsiballZ_copy.py'
Jan 23 11:41:40 compute-0 sudo[223690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:40 compute-0 python3.9[223692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168499.667945-602-238835117560917/.source.yaml _original_basename=.hfr2f19p follow=False checksum=ca66399973f6d58a5bbac4b420df7ce98873b151 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
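
The copy task above records checksum=ca66399973f6d58a5bbac4b420df7ce98873b151, and ansible computes that field with SHA-1, so the deployed file can be verified independently:

    import hashlib

    # Compare the on-disk file against the SHA-1 ansible logged for the copy.
    with open("/var/lib/edpm-config/deployed_services.yaml", "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    print(digest == "ca66399973f6d58a5bbac4b420df7ce98873b151")
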
Jan 23 11:41:41 compute-0 sudo[223690]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:41 compute-0 sudo[223842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkvdxfovntorkwupnabeflbwvybzweum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168501.2324631-617-54197153159322/AnsiballZ_systemd.py'
Jan 23 11:41:41 compute-0 sudo[223842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:41 compute-0 python3.9[223844]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:41:41 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:42.064 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:42.166 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:42.166 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:42.167 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[220633]: 2026-01-23 11:41:42.177 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
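
The shutdown sequence above (master catches SIGTERM, forwards it to the AgentManager worker, waits, then logs "Shutdown finish") is cotyledon's standard service lifecycle. A minimal sketch of a cotyledon service with the same shape, assuming the cotyledon package is installed; the Agent class here is a stand-in for ceilometer's AgentManager, not its actual code:

    import time
    import cotyledon

    class Agent(cotyledon.Service):
        def __init__(self, worker_id):
            super().__init__(worker_id)
            self.running = True

        def run(self):
            # Worker loop; exits once terminate() flips the flag.
            while self.running:
                time.sleep(1)

        def terminate(self):
            # Called on SIGTERM, mirroring "Caught SIGTERM signal,
            # graceful exiting of service AgentManager(0)" above.
            self.running = False

    sm = cotyledon.ServiceManager()
    sm.add(Agent, workers=1)
    sm.run()
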
Jan 23 11:41:42 compute-0 systemd[1]: libpod-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.scope: Deactivated successfully.
Jan 23 11:41:42 compute-0 systemd[1]: libpod-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.scope: Consumed 2.066s CPU time.
Jan 23 11:41:42 compute-0 podman[223848]: 2026-01-23 11:41:42.319473388 +0000 UTC m=+0.316078981 container died adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 11:41:42 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-689e231a39b0488d.timer: Deactivated successfully.
Jan 23 11:41:42 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.
Jan 23 11:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-userdata-shm.mount: Deactivated successfully.
Jan 23 11:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378-merged.mount: Deactivated successfully.
Jan 23 11:41:42 compute-0 podman[223848]: 2026-01-23 11:41:42.381906872 +0000 UTC m=+0.378512465 container cleanup adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:41:42 compute-0 podman[223848]: ceilometer_agent_ipmi
Jan 23 11:41:42 compute-0 podman[223877]: ceilometer_agent_ipmi
Jan 23 11:41:42 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 23 11:41:42 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 23 11:41:42 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 23 11:41:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8362cb5d6ae46828874170b46eacf46cf53ba91416b2e3c6b8b5383b2ee3378/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 23 11:41:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.
Jan 23 11:41:42 compute-0 podman[223888]: 2026-01-23 11:41:42.640097298 +0000 UTC m=+0.142995083 container init adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + sudo -E kolla_set_configs
Jan 23 11:41:42 compute-0 podman[223888]: 2026-01-23 11:41:42.665228362 +0000 UTC m=+0.168126147 container start adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 11:41:42 compute-0 podman[223888]: ceilometer_agent_ipmi
Jan 23 11:41:42 compute-0 sudo[223908]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 23 11:41:42 compute-0 sudo[223908]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 11:41:42 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 23 11:41:42 compute-0 sudo[223908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:41:42 compute-0 sudo[223842]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Validating config file
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Copying service configuration files
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: INFO:__main__:Writing out command to execute
Jan 23 11:41:42 compute-0 sudo[223908]: pam_unix(sudo:session): session closed for user root
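
Each Deleting/Copying/Setting-permission triplet above is kolla_set_configs walking the config_files entries of /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy. A simplified sketch of that loop; the entry format shown is an illustrative subset, not kolla's full schema:

    import os
    import shutil

    # Illustrative subset of a config.json config_files list.
    config_files = [
        {"source": "/var/lib/kolla/config_files/src/ceilometer.conf",
         "dest": "/etc/ceilometer/ceilometer.conf",
         "perm": "0640"},
    ]

    for entry in config_files:              # COPY_ALWAYS: replace unconditionally
        if os.path.exists(entry["dest"]):
            os.unlink(entry["dest"])        # "Deleting ..."
        shutil.copy(entry["source"], entry["dest"])       # "Copying ..."
        os.chmod(entry["dest"], int(entry["perm"], 8))    # "Setting permission ..."
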
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: ++ cat /run_command
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + ARGS=
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + sudo kolla_copy_cacerts
Jan 23 11:41:42 compute-0 podman[223909]: 2026-01-23 11:41:42.748056465 +0000 UTC m=+0.066879645 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:41:42 compute-0 sudo[223935]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 23 11:41:42 compute-0 sudo[223935]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 11:41:42 compute-0 sudo[223935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:41:42 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-71511985ccc5dbd9.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:41:42 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-71511985ccc5dbd9.service: Failed with result 'exit-code'.
Jan 23 11:41:42 compute-0 sudo[223935]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + [[ ! -n '' ]]
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + . kolla_extend_start
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + umask 0022
Jan 23 11:41:42 compute-0 ceilometer_agent_ipmi[223902]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 23 11:41:43 compute-0 sudo[224083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpojotodjmgyvlzvbxcfjxguzesvbwlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168502.925367-625-114642039385224/AnsiballZ_systemd.py'
Jan 23 11:41:43 compute-0 sudo[224083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.617 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.617 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.618 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.619 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 python3.9[224085]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.620 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.621 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.622 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.623 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.624 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.625 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.626 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.627 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.628 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.629 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.630 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.631 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.632 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.633 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.651 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.652 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.653 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 23 11:41:43 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:43.665 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1zq_sriq/privsep.sock']
Jan 23 11:41:43 compute-0 systemd[1]: Stopping kepler container...
Jan 23 11:41:43 compute-0 sudo[224094]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp1zq_sriq/privsep.sock
Jan 23 11:41:43 compute-0 sudo[224094]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 11:41:43 compute-0 sudo[224094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 23 11:41:43 compute-0 kepler[223199]: I0123 11:41:43.727095       1 exporter.go:218] Received shutdown signal
Jan 23 11:41:43 compute-0 kepler[223199]: I0123 11:41:43.727872       1 exporter.go:226] Exiting...
Jan 23 11:41:43 compute-0 systemd[1]: libpod-900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.scope: Deactivated successfully.
Jan 23 11:41:43 compute-0 podman[224092]: 2026-01-23 11:41:43.915497316 +0000 UTC m=+0.234166900 container died 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=kepler, release=1214.1726694543, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:41:43 compute-0 systemd[1]: 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a-5f458c8f7c81b913.timer: Deactivated successfully.
Jan 23 11:41:43 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.
Jan 23 11:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bebdd70f70037e674abe880d5d5e577fbcf5fd2eeebaa2f8200e6915053125c-merged.mount: Deactivated successfully.
Jan 23 11:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a-userdata-shm.mount: Deactivated successfully.
Jan 23 11:41:43 compute-0 podman[224092]: 2026-01-23 11:41:43.951936695 +0000 UTC m=+0.270606279 container cleanup 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, config_id=kepler, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 23 11:41:43 compute-0 podman[224092]: kepler
Jan 23 11:41:44 compute-0 podman[224124]: kepler
Jan 23 11:41:44 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 23 11:41:44 compute-0 systemd[1]: Stopped kepler container.
Jan 23 11:41:44 compute-0 systemd[1]: Starting kepler container...
Jan 23 11:41:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:41:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.
Jan 23 11:41:44 compute-0 podman[224137]: 2026-01-23 11:41:44.145682137 +0000 UTC m=+0.106810550 container init 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 23 11:41:44 compute-0 kepler[224153]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.175899       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.176031       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.176049       1 config.go:295] kernel version: 5.14
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.177312       1 power.go:78] Unable to obtain power, use estimate method
Jan 23 11:41:44 compute-0 podman[224137]: 2026-01-23 11:41:44.177515354 +0000 UTC m=+0.138643747 container start 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, config_id=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.177338       1 redfish.go:169] failed to get redfish credential file path
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.177874       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.177880       1 power.go:79] using none to obtain power
Jan 23 11:41:44 compute-0 kepler[224153]: E0123 11:41:44.177894       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 23 11:41:44 compute-0 kepler[224153]: E0123 11:41:44.177916       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 23 11:41:44 compute-0 kepler[224153]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.179448       1 exporter.go:84] Number of CPUs: 8
Jan 23 11:41:44 compute-0 podman[224137]: kepler
Jan 23 11:41:44 compute-0 systemd[1]: Started kepler container.
Jan 23 11:41:44 compute-0 sudo[224083]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:44 compute-0 podman[224163]: 2026-01-23 11:41:44.248383785 +0000 UTC m=+0.058815798 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public, architecture=x86_64, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:41:44 compute-0 systemd[1]: 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a-14955b3f8b4fd4dc.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:41:44 compute-0 systemd[1]: 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a-14955b3f8b4fd4dc.service: Failed with result 'exit-code'.
Jan 23 11:41:44 compute-0 sudo[224094]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.278 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.279 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1zq_sriq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.171 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.175 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.180 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.180 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.375 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.376 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.377 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.377 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.377 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.377 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.378 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.383 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.383 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.384 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.385 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.385 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.385 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.385 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.385 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.386 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.387 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.388 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.389 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.390 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.391 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.392 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.393 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.394 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.395 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.396 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.397 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.398 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.400 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.401 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.405 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
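[editor's note] Every line of the dump above, ending in the row of asterisks, is produced by oslo.config's log_opt_values(), which cotyledon invokes at service start: each resolved option is emitted at DEBUG, and options registered as secret (transport_url, coordination.backend_url, the rgw and telemetry keys) are masked as ****. A minimal sketch of the same mechanism, using illustrative option names taken from the dump:

import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.register_opts([
    cfg.IntOpt('rate_limit_burst', default=0),
    # secret=True is what renders a value as **** in the dump above
    cfg.StrOpt('transport_url', secret=True),
])

CONF([], project='demo')                 # parse (empty) CLI args / config files
CONF.log_opt_values(LOG, logging.DEBUG)  # prints "option = value" per opt, then a banner
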
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.405 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 23 11:41:44 compute-0 ceilometer_agent_ipmi[223902]: 2026-01-23 11:41:44.407 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
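[editor's note] The "Config file:" line shows the polling definition the IPMI agent actually loaded: one source named "pollsters" that polls hardware.* meters every 120 seconds (the file name comes from the polling.cfg_file = polling.yaml option logged earlier). A sketch reconstructing that YAML from the parsed dict in the log; this assumes PyYAML is available and is not ceilometer's own loader:

import yaml

POLLING_YAML = """\
sources:
  - name: pollsters
    interval: 120
    meters:
      - 'hardware.*'
"""

config = yaml.safe_load(POLLING_YAML)
# Matches the dict reported by ceilometer.agent load_config above.
assert config == {'sources': [{'name': 'pollsters',
                               'interval': 120,
                               'meters': ['hardware.*']}]}
print(config)
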
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.596212       1 watcher.go:83] Using in cluster k8s config
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.596253       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 23 11:41:44 compute-0 kepler[224153]: E0123 11:41:44.596334       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.600631       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.600668       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.603791       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.603818       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.613665       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.613701       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.613714       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.625880       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.626464       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.626772       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.627058       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.627376       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.627674       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.628101       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.628622       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.628937       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.629601       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.629991       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 23 11:41:44 compute-0 kepler[224153]: I0123 11:41:44.631542       1 exporter.go:208] Started Kepler in 455.832672ms
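[editor's note] Kepler has now registered its Process, Container, VM and Node metric families and is listening on 0.0.0.0:8888, so its power estimates can be scraped like any Prometheus exporter. A minimal polling sketch; the /metrics path is the Prometheus convention and the kepler_ name prefix is an assumption about how the families are named, not something shown in this log:

import urllib.request

URL = 'http://127.0.0.1:8888/metrics'  # the exporter the log shows on 0.0.0.0:8888

with urllib.request.urlopen(URL, timeout=5) as resp:
    text = resp.read().decode('utf-8')

for line in text.splitlines():
    if line.startswith('kepler_'):     # assumed prefix for Kepler metric families
        print(line)
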
Jan 23 11:41:44 compute-0 sudo[224354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmzswtrkyaxcjvqbmkrrghugatxbyrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168504.4217393-633-112868898042789/AnsiballZ_find.py'
Jan 23 11:41:44 compute-0 sudo[224354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:44 compute-0 python3.9[224356]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 11:41:44 compute-0 sudo[224354]: pam_unix(sudo:session): session closed for user root
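[editor's note] The AnsiballZ payload above is ansible.builtin.find enumerating the per-service healthcheck directories under /var/lib/openstack/healthchecks/ with file_type=directory, recurse=False, hidden=False. A local sketch of the same scan with those options:

from pathlib import Path

root = Path('/var/lib/openstack/healthchecks/')

# recurse=False, hidden=False: immediate, non-hidden subdirectories only
dirs = ([p for p in root.iterdir()
         if p.is_dir() and not p.name.startswith('.')]
        if root.is_dir() else [])
print(dirs)
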
Jan 23 11:41:46 compute-0 sudo[224506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvspritkymaopyllbnabgezrwszakfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168505.4789116-643-67492545670251/AnsiballZ_podman_container_info.py'
Jan 23 11:41:46 compute-0 sudo[224506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:46 compute-0 python3.9[224508]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 23 11:41:46 compute-0 sudo[224506]: pam_unix(sudo:session): session closed for user root
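[editor's note] podman_container_info, invoked above for ovn_controller (and again later for ovn_metadata_agent), is equivalent to "podman container inspect", which returns a JSON array of container facts. A subprocess sketch; the State.Status and Config.Image keys are standard inspect fields but are assumptions here, since the log does not show the module's output:

import json
import subprocess

out = subprocess.run(['podman', 'container', 'inspect', 'ovn_controller'],
                     check=True, capture_output=True, text=True)
info = json.loads(out.stdout)[0]       # inspect returns a JSON array
print(info['State']['Status'], info['Config']['Image'])
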
Jan 23 11:41:47 compute-0 sudo[224670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycoaapynyvudexovfvbycyjmnhwdsfyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168506.541597-651-133510660605745/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:47 compute-0 sudo[224670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:47 compute-0 python3.9[224672]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:47 compute-0 systemd[1]: Started libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope.
Jan 23 11:41:47 compute-0 podman[224673]: 2026-01-23 11:41:47.444603454 +0000 UTC m=+0.103816637 container exec 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 11:41:47 compute-0 podman[224673]: 2026-01-23 11:41:47.477874606 +0000 UTC m=+0.137087749 container exec_died 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 23 11:41:47 compute-0 systemd[1]: libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope: Deactivated successfully.
Jan 23 11:41:47 compute-0 sudo[224670]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:48 compute-0 podman[224830]: 2026-01-23 11:41:48.221553107 +0000 UTC m=+0.081096921 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
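[editor's note] The health_status event above is podman running the container's configured healthcheck (test: /openstack/healthcheck node_exporter) and recording health_status=healthy with health_failing_streak=0. The same check can be triggered on demand; exit status 0 means healthy:

import subprocess

# "podman healthcheck run" executes the container's configured test command.
rc = subprocess.run(['podman', 'healthcheck', 'run', 'node_exporter']).returncode
print('healthy' if rc == 0 else 'unhealthy (exit %d)' % rc)
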
Jan 23 11:41:48 compute-0 sudo[224871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyaodnxrnhvqmuesnolsuafchuwledkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168507.7715034-659-97067151653594/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:48 compute-0 sudo[224871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:48 compute-0 python3.9[224880]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:48 compute-0 systemd[1]: Started libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope.
Jan 23 11:41:48 compute-0 podman[224882]: 2026-01-23 11:41:48.558069446 +0000 UTC m=+0.086820181 container exec 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 11:41:48 compute-0 podman[224882]: 2026-01-23 11:41:48.589582496 +0000 UTC m=+0.118333221 container exec_died 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 11:41:48 compute-0 systemd[1]: libpod-conmon-1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170.scope: Deactivated successfully.
Jan 23 11:41:48 compute-0 sudo[224871]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:49 compute-0 sudo[225061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmtpbirlamsogtytpjjrhlsqjyqvkcwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168508.867737-667-225158529428129/AnsiballZ_file.py'
Jan 23 11:41:49 compute-0 sudo[225061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:49 compute-0 python3.9[225063]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:49 compute-0 sudo[225061]: pam_unix(sudo:session): session closed for user root
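[editor's note] The last three tasks form one pattern: podman_container_exec runs "id -u" and "id -g" inside ovn_controller to discover the runtime uid/gid (both 0 here, since the container runs as root), and ansible.builtin.file then applies owner=0 group=0 mode=0700 to the mounted healthcheck directory. A sketch of the same sequence; unlike the logged file task it applies ownership non-recursively, and error handling is kept trivial:

import os
import stat
import subprocess
from pathlib import Path

def container_id(name, flag):
    """Run `id -u` / `id -g` inside a container, like podman_container_exec."""
    out = subprocess.run(['podman', 'exec', name, 'id', flag],
                         check=True, capture_output=True, text=True)
    return int(out.stdout)

uid = container_id('ovn_controller', '-u')   # 0 per the tasks above
gid = container_id('ovn_controller', '-g')
target = Path('/var/lib/openstack/healthchecks/ovn_controller')
os.chown(target, uid, gid)
target.chmod(stat.S_IRWXU)                   # mode=0700, as in the file task
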
Jan 23 11:41:50 compute-0 sudo[225214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tppddhyvcyjksnkftqwljvfzdqqbywrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168509.774995-676-196320418665992/AnsiballZ_podman_container_info.py'
Jan 23 11:41:50 compute-0 sudo[225214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:50 compute-0 python3.9[225216]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 23 11:41:50 compute-0 sudo[225214]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:51 compute-0 sudo[225378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewombazadfjrbumngaltfbaknyjnnwdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168510.8487864-684-70450776114518/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:51 compute-0 sudo[225378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:51 compute-0 python3.9[225380]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:51 compute-0 systemd[1]: Started libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope.
Jan 23 11:41:51 compute-0 podman[225381]: 2026-01-23 11:41:51.764712998 +0000 UTC m=+0.139893597 container exec d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 23 11:41:51 compute-0 podman[225381]: 2026-01-23 11:41:51.799716783 +0000 UTC m=+0.174897362 container exec_died d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 11:41:51 compute-0 systemd[1]: libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope: Deactivated successfully.
Jan 23 11:41:51 compute-0 sudo[225378]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:52 compute-0 sudo[225561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jezgvrzljhyhhzutdilnqcoivheavuux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168512.0894094-692-123291297082144/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:52 compute-0 sudo[225561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:52 compute-0 python3.9[225563]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:52 compute-0 systemd[1]: Started libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope.
Jan 23 11:41:52 compute-0 podman[225564]: 2026-01-23 11:41:52.731474849 +0000 UTC m=+0.098831858 container exec d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 11:41:52 compute-0 podman[225564]: 2026-01-23 11:41:52.766082663 +0000 UTC m=+0.133439572 container exec_died d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 11:41:52 compute-0 systemd[1]: libpod-conmon-d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8.scope: Deactivated successfully.
Jan 23 11:41:52 compute-0 sudo[225561]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:53 compute-0 sudo[225757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qslaitjbbvfbwzeswwmmelscqtronucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168513.0331283-700-7746227075348/AnsiballZ_file.py'
Jan 23 11:41:53 compute-0 sudo[225757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:53 compute-0 podman[225717]: 2026-01-23 11:41:53.424626781 +0000 UTC m=+0.071036884 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal)
Jan 23 11:41:53 compute-0 python3.9[225764]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:53 compute-0 sudo[225757]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:54 compute-0 sudo[225914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odqmxnljsayudrmzgzkcfkjbpquvltxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168513.9209433-709-105332443945523/AnsiballZ_podman_container_info.py'
Jan 23 11:41:54 compute-0 sudo[225914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:54 compute-0 python3.9[225916]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 23 11:41:54 compute-0 sudo[225914]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:55 compute-0 sudo[226079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggabtsgrgkgovkdehznybrwjamflxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168514.9714906-717-1198769065239/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:55 compute-0 sudo[226079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:55 compute-0 python3.9[226081]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:55 compute-0 systemd[1]: Started libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope.
Jan 23 11:41:55 compute-0 podman[226082]: 2026-01-23 11:41:55.635127318 +0000 UTC m=+0.090100970 container exec 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120)
Jan 23 11:41:55 compute-0 podman[226082]: 2026-01-23 11:41:55.666010789 +0000 UTC m=+0.120984421 container exec_died 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260120, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 11:41:55 compute-0 systemd[1]: libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope: Deactivated successfully.
Jan 23 11:41:55 compute-0 sudo[226079]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:56 compute-0 sudo[226263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ousdwcvirfshyqdddrkcsevdxtieyshv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168515.9358308-725-271788046910926/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:56 compute-0 sudo[226263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:56 compute-0 python3.9[226265]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:56 compute-0 systemd[1]: Started libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope.
Jan 23 11:41:56 compute-0 podman[226266]: 2026-01-23 11:41:56.893727855 +0000 UTC m=+0.119211747 container exec 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 23 11:41:56 compute-0 podman[226266]: 2026-01-23 11:41:56.92675874 +0000 UTC m=+0.152242582 container exec_died 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Jan 23 11:41:56 compute-0 systemd[1]: libpod-conmon-6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad.scope: Deactivated successfully.
Jan 23 11:41:56 compute-0 sudo[226263]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:57 compute-0 sudo[226446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xudotvlnatxxlwowtckhbvqoqtrbukrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168517.234451-733-275732358365861/AnsiballZ_file.py'
Jan 23 11:41:57 compute-0 sudo[226446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:57 compute-0 python3.9[226448]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:41:57 compute-0 sudo[226446]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:58 compute-0 sudo[226598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecrmnfccgyymopbheoamdtxfinsdkfvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168518.2539773-742-128506122918692/AnsiballZ_podman_container_info.py'
Jan 23 11:41:58 compute-0 sudo[226598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:58 compute-0 python3.9[226600]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 23 11:41:58 compute-0 sudo[226598]: pam_unix(sudo:session): session closed for user root
Jan 23 11:41:59 compute-0 sudo[226760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcwreyvbvyzouchrojsfqgkfllfuxcyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168519.197105-750-61182469358162/AnsiballZ_podman_container_exec.py'
Jan 23 11:41:59 compute-0 sudo[226760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:41:59 compute-0 podman[201022]: time="2026-01-23T11:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:41:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27277 "" "Go-http-client/1.1"
Jan 23 11:41:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3841 "" "Go-http-client/1.1"
Jan 23 11:41:59 compute-0 python3.9[226762]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:41:59 compute-0 systemd[1]: Started libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope.
Jan 23 11:41:59 compute-0 podman[226763]: 2026-01-23 11:41:59.956532797 +0000 UTC m=+0.112874769 container exec 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:41:59 compute-0 podman[226763]: 2026-01-23 11:41:59.988589317 +0000 UTC m=+0.144931229 container exec_died 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:42:00 compute-0 systemd[1]: libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope: Deactivated successfully.
Jan 23 11:42:00 compute-0 sudo[226760]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:00 compute-0 sudo[226943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qskbqqxblpwjfzzprgxtpqoddfclldup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168520.2170641-758-116347653544422/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:00 compute-0 sudo[226943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:00 compute-0 python3.9[226945]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:00 compute-0 systemd[1]: Started libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope.
Jan 23 11:42:00 compute-0 podman[226946]: 2026-01-23 11:42:00.988929697 +0000 UTC m=+0.102883439 container exec 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:42:01 compute-0 podman[226946]: 2026-01-23 11:42:01.02107951 +0000 UTC m=+0.135033232 container exec_died 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:42:01 compute-0 sudo[226943]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:01 compute-0 systemd[1]: libpod-conmon-99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef.scope: Deactivated successfully.
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.272 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.273 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.273 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.273 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:42:01 compute-0 openstack_network_exporter[204160]: ERROR   11:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:42:01 compute-0 openstack_network_exporter[204160]: ERROR   11:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.603 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.604 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5778MB free_disk=72.47836303710938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.605 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.605 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:42:01 compute-0 sudo[227125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkwpkjcpsteyxuvgyjzrbkhltkauudkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168521.2945104-766-15578681871554/AnsiballZ_file.py'
Jan 23 11:42:01 compute-0 sudo[227125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.668 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.668 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.700 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.717 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.718 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:42:01 compute-0 nova_compute[185173]: 2026-01-23 11:42:01.719 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:42:01 compute-0 python3.9[227127]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:01 compute-0 sudo[227125]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:02 compute-0 sudo[227295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzandoumiqymfptwbsijmnltxxxgitsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168522.187509-775-200081426019952/AnsiballZ_podman_container_info.py'
Jan 23 11:42:02 compute-0 podman[227251]: 2026-01-23 11:42:02.66524222 +0000 UTC m=+0.083099715 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 23 11:42:02 compute-0 sudo[227295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:02 compute-0 nova_compute[185173]: 2026-01-23 11:42:02.719 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:02 compute-0 python3.9[227298]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 23 11:42:02 compute-0 sudo[227295]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.231 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.274 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.274 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.274 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:03 compute-0 nova_compute[185173]: 2026-01-23 11:42:03.274 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:03 compute-0 sudo[227462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zucpvelhyninmmoqyoirlaherzwuvier ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168523.3562844-783-151691954151991/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:03 compute-0 sudo[227462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:03 compute-0 python3.9[227464]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:03 compute-0 systemd[1]: Started libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope.
Jan 23 11:42:03 compute-0 podman[227465]: 2026-01-23 11:42:03.987354393 +0000 UTC m=+0.102489640 container exec 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:42:04 compute-0 podman[227465]: 2026-01-23 11:42:04.019489205 +0000 UTC m=+0.134624432 container exec_died 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 11:42:04 compute-0 systemd[1]: libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope: Deactivated successfully.
Jan 23 11:42:04 compute-0 sudo[227462]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:04 compute-0 nova_compute[185173]: 2026-01-23 11:42:04.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:04 compute-0 podman[227617]: 2026-01-23 11:42:04.663178442 +0000 UTC m=+0.071568917 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:42:04 compute-0 sudo[227662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fatgwrdooqzxkymjwtcmixewbekvtttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168524.2816274-791-18219570436194/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:04 compute-0 sudo[227662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:04 compute-0 python3.9[227668]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:04 compute-0 systemd[1]: Started libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope.
Jan 23 11:42:04 compute-0 podman[227669]: 2026-01-23 11:42:04.943607062 +0000 UTC m=+0.075221169 container exec 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:42:04 compute-0 podman[227669]: 2026-01-23 11:42:04.97437893 +0000 UTC m=+0.105993037 container exec_died 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:42:05 compute-0 systemd[1]: libpod-conmon-48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693.scope: Deactivated successfully.
Jan 23 11:42:05 compute-0 sudo[227662]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:05 compute-0 nova_compute[185173]: 2026-01-23 11:42:05.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:05 compute-0 sudo[227846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvhjjmxpwrlrrrlkzptdjedmoyusmewo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168525.2091324-799-76752495385084/AnsiballZ_file.py'
Jan 23 11:42:05 compute-0 sudo[227846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:05 compute-0 python3.9[227848]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:05 compute-0 sudo[227846]: pam_unix(sudo:session): session closed for user root
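[editor's note] The block above is one full pass of the pattern this play repeats for every telemetry container: `podman_container_exec` probes `id -u` and `id -g` inside the running container, then `ansible.builtin.file` recursively chowns the container's healthcheck mount (here `/var/lib/openstack/healthchecks/podman_exporter`, owner=0/group=0 because the exporter runs as root) to that uid/gid with mode 0700. A rough standalone equivalent — a hypothetical helper, not the edpm_ansible role code:

    import subprocess

    def container_id(name: str, flag: str) -> int:
        """Run `id -u` or `id -g` inside a container and return the number."""
        out = subprocess.run(
            ["podman", "exec", name, "id", flag],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        return int(out)

    def chown_healthcheck_dir(name: str) -> None:
        uid = container_id(name, "-u")
        gid = container_id(name, "-g")
        path = f"/var/lib/openstack/healthchecks/{name}"
        # Mirrors the ansible.builtin.file task: owner/group from the
        # probes, mode=0700, recurse=True.
        subprocess.run(["chown", "-R", f"{uid}:{gid}", path], check=True)
        subprocess.run(["chmod", "-R", "0700", path], check=True)

    chown_healthcheck_dir("podman_exporter")

For `ceilometer_agent_ipmi`, which runs as the kolla `ceilometer` user, the same probe yields 42405:42405, matching the later file task in this log.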
Jan 23 11:42:06 compute-0 nova_compute[185173]: 2026-01-23 11:42:06.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:42:06 compute-0 nova_compute[185173]: 2026-01-23 11:42:06.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
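[editor's note] The `_reclaim_queued_deletes` line shows nova-compute short-circuiting the periodic task: with `reclaim_instance_interval <= 0` (the default), soft-deleted instances are never reaped by this loop. The guard amounts to the following — a sketch of the logged behaviour, not nova's exact code:

    def maybe_reclaim_queued_deletes(reclaim_instance_interval: int) -> None:
        # Mirrors the DEBUG line nova-compute emitted above.
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise reclaim instances soft-deleted longer ago than
        # the configured interval...

    maybe_reclaim_queued_deletes(0)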
Jan 23 11:42:06 compute-0 sudo[228008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byjimgcwvocxkkzuhuzkcjgxagvknjox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168526.027985-808-36893954209471/AnsiballZ_podman_container_info.py'
Jan 23 11:42:06 compute-0 sudo[228008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:06 compute-0 podman[227972]: 2026-01-23 11:42:06.398861257 +0000 UTC m=+0.074167142 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:42:06 compute-0 python3.9[228016]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 23 11:42:06 compute-0 sudo[228008]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:07 compute-0 sudo[228180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdaudjqzmxijvgiggncuvuwhzwgdgvsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168526.864043-816-122954639919961/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:07 compute-0 sudo[228180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:07 compute-0 python3.9[228182]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:07 compute-0 systemd[1]: Started libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope.
Jan 23 11:42:07 compute-0 podman[228183]: 2026-01-23 11:42:07.591648711 +0000 UTC m=+0.133245417 container exec cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, config_id=openstack_network_exporter, distribution-scope=public)
Jan 23 11:42:07 compute-0 podman[228183]: 2026-01-23 11:42:07.625169338 +0000 UTC m=+0.166766044 container exec_died cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 23 11:42:07 compute-0 systemd[1]: libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope: Deactivated successfully.
Jan 23 11:42:07 compute-0 sudo[228180]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:08 compute-0 sudo[228384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvnjphuynwoyxgokxlymuunljsumtsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168528.0338418-824-265253993565978/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:08 compute-0 sudo[228384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:08 compute-0 podman[228337]: 2026-01-23 11:42:08.551638014 +0000 UTC m=+0.137762179 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 11:42:08 compute-0 python3.9[228390]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:08 compute-0 systemd[1]: Started libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope.
Jan 23 11:42:08 compute-0 podman[228392]: 2026-01-23 11:42:08.891903488 +0000 UTC m=+0.143297818 container exec cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, release=1755695350, version=9.6, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 11:42:08 compute-0 podman[228392]: 2026-01-23 11:42:08.92523578 +0000 UTC m=+0.176630090 container exec_died cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, architecture=x86_64, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Jan 23 11:42:08 compute-0 sudo[228384]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:08 compute-0 systemd[1]: libpod-conmon-cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7.scope: Deactivated successfully.
Jan 23 11:42:09 compute-0 sudo[228570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzaxpdhzvvdscsnnoyzmiwyhrprhjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168529.2091806-832-38712862440499/AnsiballZ_file.py'
Jan 23 11:42:09 compute-0 sudo[228570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:09 compute-0 python3.9[228572]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:09 compute-0 sudo[228570]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:10 compute-0 sudo[228722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmialazujjxigpkyndqncbfjymabkqtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168530.2430398-841-42712726354176/AnsiballZ_podman_container_info.py'
Jan 23 11:42:10 compute-0 sudo[228722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:10 compute-0 python3.9[228724]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 23 11:42:11 compute-0 sudo[228722]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:11 compute-0 sudo[228887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutwrxzpwvdpvlocpbfdzytvjlikxrtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168531.3452482-849-86321419752063/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:11 compute-0 sudo[228887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:12 compute-0 python3.9[228889]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:12 compute-0 systemd[1]: Started libpod-conmon-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.scope.
Jan 23 11:42:12 compute-0 podman[228890]: 2026-01-23 11:42:12.149965564 +0000 UTC m=+0.128844787 container exec adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 11:42:12 compute-0 podman[228890]: 2026-01-23 11:42:12.184539637 +0000 UTC m=+0.163418870 container exec_died adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 11:42:12 compute-0 systemd[1]: libpod-conmon-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.scope: Deactivated successfully.
Jan 23 11:42:12 compute-0 sudo[228887]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:12 compute-0 sudo[229088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsebxxgjjpswtteltzjjnqalvbzkyilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168532.4768026-857-265052255761070/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:12 compute-0 sudo[229088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:12 compute-0 podman[229045]: 2026-01-23 11:42:12.874018657 +0000 UTC m=+0.074346896 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 11:42:12 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-71511985ccc5dbd9.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:42:12 compute-0 systemd[1]: adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af-71511985ccc5dbd9.service: Failed with result 'exit-code'.
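[editor's note] These two systemd lines are the failure path of podman's healthcheck machinery: each check runs from a transient `<container-id>-<hash>.service` unit, and its exit status 1 here is what pushed the preceding `ceilometer_agent_ipmi` event to `health_status=starting` with `health_failing_streak=2`; only after the configured retry count does the container flip to unhealthy. To run the container's configured check once by hand (a sketch; `podman healthcheck run` exits 0 when healthy):

    import subprocess

    # Execute the healthcheck command stored in the container's config
    # ('/openstack/healthcheck ipmi' per the config_data above) exactly once.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"],
        capture_output=True,
        text=True,
    )
    print("healthy" if result.returncode == 0
          else f"failing (rc={result.returncode})")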
Jan 23 11:42:13 compute-0 python3.9[229092]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:13 compute-0 systemd[1]: Started libpod-conmon-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.scope.
Jan 23 11:42:13 compute-0 podman[229093]: 2026-01-23 11:42:13.215598434 +0000 UTC m=+0.129104104 container exec adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:42:13 compute-0 podman[229093]: 2026-01-23 11:42:13.247866509 +0000 UTC m=+0.161372159 container exec_died adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 11:42:13 compute-0 sudo[229088]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:13 compute-0 systemd[1]: libpod-conmon-adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af.scope: Deactivated successfully.
Jan 23 11:42:13 compute-0 sudo[229274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjscoxhtfgewlmrfdpdjykvcfurggvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168533.5162559-865-59707182707021/AnsiballZ_file.py'
Jan 23 11:42:13 compute-0 sudo[229274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:14 compute-0 python3.9[229276]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:14 compute-0 sudo[229274]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:14 compute-0 podman[229376]: 2026-01-23 11:42:14.755032011 +0000 UTC m=+0.086817618 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Jan 23 11:42:14 compute-0 sudo[229446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avaonavusehwrnntyuerhdlvleffvtsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168534.4478557-874-29306955625072/AnsiballZ_podman_container_info.py'
Jan 23 11:42:14 compute-0 sudo[229446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:15 compute-0 python3.9[229448]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 23 11:42:15 compute-0 sudo[229446]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:15 compute-0 sudo[229611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncaoixmbdhyyxomgsnanwfoquktepust ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168535.431736-882-107803842343362/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:15 compute-0 sudo[229611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:15 compute-0 python3.9[229613]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:16 compute-0 systemd[1]: Started libpod-conmon-900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.scope.
Jan 23 11:42:16 compute-0 podman[229614]: 2026-01-23 11:42:16.087860711 +0000 UTC m=+0.086295075 container exec 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=kepler, io.openshift.expose-services=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container)
Jan 23 11:42:16 compute-0 podman[229614]: 2026-01-23 11:42:16.123055949 +0000 UTC m=+0.121490293 container exec_died 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.buildah.version=1.29.0)
Jan 23 11:42:16 compute-0 systemd[1]: libpod-conmon-900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.scope: Deactivated successfully.
Jan 23 11:42:16 compute-0 sudo[229611]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:16 compute-0 sudo[229793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zueqotghlkiqhlfhzggvnbhboerygeuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168536.3772883-890-275748564537293/AnsiballZ_podman_container_exec.py'
Jan 23 11:42:16 compute-0 sudo[229793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:17 compute-0 python3.9[229795]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 23 11:42:17 compute-0 systemd[1]: Started libpod-conmon-900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.scope.
Jan 23 11:42:17 compute-0 podman[229796]: 2026-01-23 11:42:17.164092284 +0000 UTC m=+0.094151731 container exec 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, config_id=kepler, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Jan 23 11:42:17 compute-0 podman[229796]: 2026-01-23 11:42:17.197912679 +0000 UTC m=+0.127972126 container exec_died 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, architecture=x86_64, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Jan 23 11:42:17 compute-0 systemd[1]: libpod-conmon-900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a.scope: Deactivated successfully.
Jan 23 11:42:17 compute-0 sudo[229793]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:17 compute-0 sudo[229973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzltplfjodsulzetqwojksjugjtjfeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168537.4592485-898-115339771473009/AnsiballZ_file.py'
Jan 23 11:42:17 compute-0 sudo[229973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:18 compute-0 python3.9[229975]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:18 compute-0 sudo[229973]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:18 compute-0 podman[230049]: 2026-01-23 11:42:18.76875918 +0000 UTC m=+0.082843389 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
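[editor's note] The node_exporter config above is the most explicit of the exporters: the flag list disables the bulk of the default collectors, keeps the systemd collector but restricts it to `edpm_*`, `ovs*`, `openvswitch`, `virt*`, and `rsyslog` units, and serves on host port 9100 behind whatever TLS settings `node_exporter.yaml` carries. A quick smoke test of the endpoint from the host — a sketch; whether HTTPS applies and which CA to trust depends on that web config, so certificate verification is disabled here:

    import ssl
    import urllib.request

    # Fetch the first metrics line from node_exporter on the host network
    # (port 9100 per 'ports' in the config_data). Verification is disabled
    # because the TLS material lives in /etc/node_exporter/tls inside the
    # container; a real check would trust that CA instead.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    url = "https://localhost:9100/metrics"
    with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
        print(resp.read().decode().splitlines()[0])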
Jan 23 11:42:18 compute-0 sudo[230149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsbmzpfntqpgrvnvdzuyghobuxgpkngf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168538.5774114-907-77198189109747/AnsiballZ_file.py'
Jan 23 11:42:18 compute-0 sudo[230149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:19 compute-0 python3.9[230151]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:19 compute-0 sudo[230149]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:19 compute-0 sudo[230302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llxivggscjjqxsuyovrnasdiuvddsbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168539.4177043-915-125389714182474/AnsiballZ_stat.py'
Jan 23 11:42:19 compute-0 sudo[230302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:20 compute-0 python3.9[230304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:20 compute-0 sudo[230302]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:20 compute-0 sudo[230425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzxkddgniuftvdlmkhutpdjohoybhtyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168539.4177043-915-125389714182474/AnsiballZ_copy.py'
Jan 23 11:42:20 compute-0 sudo[230425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:20 compute-0 python3.9[230427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769168539.4177043-915-125389714182474/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:20 compute-0 sudo[230425]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:21 compute-0 sudo[230577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awaaporxvknuxyycswvgppsmgknykhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168541.235127-931-197985274056632/AnsiballZ_file.py'
Jan 23 11:42:21 compute-0 sudo[230577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:21 compute-0 python3.9[230579]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:21 compute-0 sudo[230577]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:22 compute-0 sudo[230729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iskpxazihwzhiymzuyleorzfvjdtozbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168542.1643834-939-239459599678960/AnsiballZ_stat.py'
Jan 23 11:42:22 compute-0 sudo[230729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:22 compute-0 python3.9[230731]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:22 compute-0 sudo[230729]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:23 compute-0 sudo[230807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxofynnnjmajgbrklammyofhrcmgfbwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168542.1643834-939-239459599678960/AnsiballZ_file.py'
Jan 23 11:42:23 compute-0 sudo[230807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:23 compute-0 python3.9[230809]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:23 compute-0 sudo[230807]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:23 compute-0 podman[230910]: 2026-01-23 11:42:23.751868345 +0000 UTC m=+0.079882304 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal)
Jan 23 11:42:23 compute-0 sudo[230980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohlswzkxzicctvkayzwlctxfxvgwhwjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168543.4378011-951-214578946807061/AnsiballZ_stat.py'
Jan 23 11:42:23 compute-0 sudo[230980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:23 compute-0 python3.9[230982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:24 compute-0 sudo[230980]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:24 compute-0 sudo[231058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvopzihzhsaurtdcuihowklhmmmkhnaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168543.4378011-951-214578946807061/AnsiballZ_file.py'
Jan 23 11:42:24 compute-0 sudo[231058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:24 compute-0 python3.9[231060]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g8cu5e2p recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:24 compute-0 sudo[231058]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:25 compute-0 sudo[231210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckgnjemruymnxkymkmzsgpdovmdpcczr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168544.7382727-963-137673880259554/AnsiballZ_stat.py'
Jan 23 11:42:25 compute-0 sudo[231210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:25 compute-0 python3.9[231212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:25 compute-0 sudo[231210]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:25 compute-0 sudo[231288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcctfqopjzhxpeaqxejbfelwosuhosv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168544.7382727-963-137673880259554/AnsiballZ_file.py'
Jan 23 11:42:25 compute-0 sudo[231288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:26 compute-0 python3.9[231290]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:26 compute-0 sudo[231288]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:26 compute-0 sudo[231440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjamnuvcwgvmfzpzjlybnoddyrzxdyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168546.3272727-976-223841243295133/AnsiballZ_command.py'
Jan 23 11:42:26 compute-0 sudo[231440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:26 compute-0 python3.9[231442]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:42:26 compute-0 sudo[231440]: pam_unix(sudo:session): session closed for user root
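nft -j list ruleset dumps the live ruleset as JSON, which the playbook presumably registers to compare against the desired EDPM rules before the edpm_nftables_from_files step below merges the YAML fragments under /var/lib/edpm-config/firewall. A quick sketch, assuming jq is installed, to list the chain names in that dump:

    nft -j list ruleset | jq -r '.nftables[] | select(.chain != null) | .chain.name'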
Jan 23 11:42:27 compute-0 sudo[231593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aviogwuvdtyanmowbmhhjucbbuhipenx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168547.1949563-984-85503089982080/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 11:42:27 compute-0 sudo[231593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:27 compute-0 python3[231595]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 11:42:27 compute-0 sudo[231593]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:28 compute-0 sudo[231745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblnhbvhccxhlnyuyegrvzmhvdictvzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168548.2094095-992-3293714342181/AnsiballZ_stat.py'
Jan 23 11:42:28 compute-0 sudo[231745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:28 compute-0 python3.9[231747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:28 compute-0 sudo[231745]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:42:29.083 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:42:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:42:29.083 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:42:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:42:29.083 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:42:29 compute-0 sudo[231823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gphflxdzztozihxkfrtyngsbintrahch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168548.2094095-992-3293714342181/AnsiballZ_file.py'
Jan 23 11:42:29 compute-0 sudo[231823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:29 compute-0 python3.9[231825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:29 compute-0 sudo[231823]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:29 compute-0 podman[201022]: time="2026-01-23T11:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:42:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 23 11:42:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3843 "" "Go-http-client/1.1"
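These GET lines are podman's REST service logging requests against the libpod API, over the same socket that podman_exporter consumes (per the /run/podman/podman.sock mount shown further down). Assuming that socket path, the endpoint can be queried directly; the hostname in the URL is a placeholder and is ignored for unix sockets:

    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'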
Jan 23 11:42:30 compute-0 sudo[231976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbdoluqlrtkiwrmicmjmbsopodzzqzme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168549.7142384-1004-160899281586426/AnsiballZ_stat.py'
Jan 23 11:42:30 compute-0 sudo[231976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:30 compute-0 python3.9[231978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:30 compute-0 sudo[231976]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:30 compute-0 sudo[232054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssoevhspkhsneygeacgzukgatbzpohin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168549.7142384-1004-160899281586426/AnsiballZ_file.py'
Jan 23 11:42:30 compute-0 sudo[232054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:30 compute-0 python3.9[232056]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:30 compute-0 sudo[232054]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:31 compute-0 openstack_network_exporter[204160]: ERROR   11:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:42:31 compute-0 openstack_network_exporter[204160]: ERROR   11:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
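These two appctl errors recur throughout this log. "please specify an existing datapath" is Open vSwitch's reply when dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show are invoked on a host with no userspace (netdev) datapath; both commands only report on PMD threads, which a kernel-datapath deployment like this one does not run, so the errors are noise rather than a fault. Assuming ovs-appctl is available on the host, the configured datapaths can be checked with:

    ovs-appctl dpif/show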
Jan 23 11:42:31 compute-0 sudo[232206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwaonhaivycaziupgtocodaxtktjejhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168551.0811682-1016-160067079999832/AnsiballZ_stat.py'
Jan 23 11:42:31 compute-0 sudo[232206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:31 compute-0 python3.9[232208]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:31 compute-0 sudo[232206]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:31 compute-0 sudo[232284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kezpjukplkrgvyvfofjfkkigjmetmofz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168551.0811682-1016-160067079999832/AnsiballZ_file.py'
Jan 23 11:42:31 compute-0 sudo[232284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:32 compute-0 python3.9[232286]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:32 compute-0 sudo[232284]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:32 compute-0 sudo[232451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doxselyinlqdixmnjzctudqhueydftaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168552.42435-1028-218971412738441/AnsiballZ_stat.py'
Jan 23 11:42:32 compute-0 sudo[232451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:32 compute-0 podman[232410]: 2026-01-23 11:42:32.80996671 +0000 UTC m=+0.065717782 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:42:32 compute-0 python3.9[232457]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:33 compute-0 sudo[232451]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:33 compute-0 sudo[232533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wknascooqfjbkpfknbynhbwayvluxlqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168552.42435-1028-218971412738441/AnsiballZ_file.py'
Jan 23 11:42:33 compute-0 sudo[232533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:33 compute-0 python3.9[232535]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:33 compute-0 sudo[232533]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:34 compute-0 sudo[232685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeucgzxiyodkzdjjsmwchpzzzmkcutcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168553.8134685-1040-137054797667833/AnsiballZ_stat.py'
Jan 23 11:42:34 compute-0 sudo[232685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:34 compute-0 python3.9[232687]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:34 compute-0 sudo[232685]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:34 compute-0 sudo[232826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuesbjbvbmouixqdryrlfpmtivzxwyuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168553.8134685-1040-137054797667833/AnsiballZ_copy.py'
Jan 23 11:42:34 compute-0 podman[232784]: 2026-01-23 11:42:34.968483899 +0000 UTC m=+0.065757142 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:42:34 compute-0 sudo[232826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:35 compute-0 python3.9[232835]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769168553.8134685-1040-137054797667833/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:35 compute-0 sudo[232826]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:35 compute-0 sudo[232985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndtrfcadvdwfvxftwtbkgueibasikued ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168555.4125533-1055-256796030277579/AnsiballZ_file.py'
Jan 23 11:42:35 compute-0 sudo[232985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:35 compute-0 python3.9[232987]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:35 compute-0 sudo[232985]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:36 compute-0 sudo[233155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyzuwkkpojimsywuvynvuwfukxwgcwey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168556.2047315-1063-142617809498816/AnsiballZ_command.py'
Jan 23 11:42:36 compute-0 podman[233111]: 2026-01-23 11:42:36.605489632 +0000 UTC m=+0.075653260 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 11:42:36 compute-0 sudo[233155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:36 compute-0 python3.9[233157]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:42:36 compute-0 sudo[233155]: pam_unix(sudo:session): session closed for user root
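The validation step concatenates the five EDPM nftables fragments in load order (chains, flushes, rules, update-jumps, jumps) and feeds them to nft in check-only mode: -c parses and validates the stream without committing anything, and '-f -' reads from stdin. Equivalently, run by hand:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -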
Jan 23 11:42:37 compute-0 sudo[233310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmldvxtlcwjqqqanpcygnfkyjytowyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168557.0967863-1071-132321702167079/AnsiballZ_blockinfile.py'
Jan 23 11:42:37 compute-0 sudo[233310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:38 compute-0 python3.9[233312]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:38 compute-0 sudo[233310]: pam_unix(sudo:session): session closed for user root
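Given the blockinfile parameters above (marker '# {mark} ANSIBLE MANAGED BLOCK' with marker_begin=BEGIN and marker_end=END), the managed section written to /etc/sysconfig/nftables.conf comes out as below; the validate='nft -c -f %s' option makes Ansible check the candidate file before swapping it into place:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK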
Jan 23 11:42:38 compute-0 podman[233337]: 2026-01-23 11:42:38.772558185 +0000 UTC m=+0.107657349 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 11:42:39 compute-0 sudo[233488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwqecyaehnrvlnwbawzkebwztkesisl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168558.741532-1080-243050347419269/AnsiballZ_command.py'
Jan 23 11:42:39 compute-0 sudo[233488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:39 compute-0 python3.9[233490]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:42:39 compute-0 sudo[233488]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:39 compute-0 sudo[233641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgiehxjxjxkynqkyjpmujwmwelltaxcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168559.5355346-1088-185236981654788/AnsiballZ_stat.py'
Jan 23 11:42:39 compute-0 sudo[233641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:40 compute-0 python3.9[233643]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 11:42:40 compute-0 sudo[233641]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:40 compute-0 sudo[233795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jomhwgsywibnfusfpasjkjsiaxbwajyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168560.344875-1096-280650958392732/AnsiballZ_command.py'
Jan 23 11:42:40 compute-0 sudo[233795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:40 compute-0 python3.9[233797]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:42:40 compute-0 sudo[233795]: pam_unix(sudo:session): session closed for user root
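Application happens in two stages: the chain definitions are loaded first (a no-op when the tables and chains already exist), then the flush, rule, and update-jump fragments are piped to nft as a single stream, which nft applies as one atomic transaction, so the ruleset is never observed half-flushed:

    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -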
Jan 23 11:42:41 compute-0 sudo[233950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdjefmisuaosentizckgruxnagwkaczk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168561.076219-1104-162377443422160/AnsiballZ_file.py'
Jan 23 11:42:41 compute-0 sudo[233950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:41 compute-0 python3.9[233952]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:41 compute-0 sudo[233950]: pam_unix(sudo:session): session closed for user root
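The edpm-rules.nft.changed file acts as a change sentinel: the earlier copy task touched it when the rendered ruleset differed, the stat above gated the reload on its presence, and it is deleted only after the rules load cleanly, so a failed apply leaves the sentinel behind for the next run. A minimal sketch of the pattern, reusing the command logged above:

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f - \
          && rm -f /etc/nftables/edpm-rules.nft.changed
    fi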
Jan 23 11:42:42 compute-0 sshd-session[212976]: Connection closed by 192.168.122.30 port 43994
Jan 23 11:42:42 compute-0 sshd-session[212973]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:42:42 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 11:42:42 compute-0 systemd[1]: session-27.scope: Consumed 1min 24.710s CPU time.
Jan 23 11:42:42 compute-0 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Jan 23 11:42:42 compute-0 systemd-logind[798]: Removed session 27.
Jan 23 11:42:43 compute-0 podman[233977]: 2026-01-23 11:42:43.764300627 +0000 UTC m=+0.101168497 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 11:42:45 compute-0 podman[233996]: 2026-01-23 11:42:45.741168801 +0000 UTC m=+0.068387358 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git)
Jan 23 11:42:47 compute-0 sshd-session[234017]: Accepted publickey for zuul from 192.168.122.30 port 44098 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 11:42:47 compute-0 systemd-logind[798]: New session 28 of user zuul.
Jan 23 11:42:47 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 23 11:42:47 compute-0 sshd-session[234017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:42:48 compute-0 python3.9[234170]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:42:49 compute-0 podman[234252]: 2026-01-23 11:42:49.744038179 +0000 UTC m=+0.067621429 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:42:50 compute-0 sudo[234349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullhnckcwchlmlhgbmpprtdlhmvyswfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168569.4346435-29-205048351708230/AnsiballZ_systemd.py'
Jan 23 11:42:50 compute-0 sudo[234349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:50 compute-0 python3.9[234351]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 23 11:42:50 compute-0 sudo[234349]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:51 compute-0 sudo[234502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemartagagjcgxyvthsojaqgdoimlkdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168570.7540877-37-230360208283127/AnsiballZ_setup.py'
Jan 23 11:42:51 compute-0 sudo[234502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:51 compute-0 python3.9[234504]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 11:42:51 compute-0 sudo[234502]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:52 compute-0 sudo[234586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgofxluiypmoiybfulqsglcrmgwifkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168570.7540877-37-230360208283127/AnsiballZ_dnf.py'
Jan 23 11:42:52 compute-0 sudo[234586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:52 compute-0 python3.9[234588]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 11:42:54 compute-0 podman[234595]: 2026-01-23 11:42:54.767132603 +0000 UTC m=+0.098578411 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 23 11:42:54 compute-0 sudo[234586]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:55 compute-0 sudo[234764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osllyjjsmnurbpruwgdhflalrvofuuss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168575.008526-49-83474277547349/AnsiballZ_stat.py'
Jan 23 11:42:55 compute-0 sudo[234764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:55 compute-0 python3.9[234766]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:55 compute-0 sudo[234764]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:56 compute-0 sudo[234887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrefuwwloxbrggdwidlikzxnkqayiokl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168575.008526-49-83474277547349/AnsiballZ_copy.py'
Jan 23 11:42:56 compute-0 sudo[234887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:56 compute-0 python3.9[234889]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168575.008526-49-83474277547349/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:56 compute-0 sudo[234887]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:57 compute-0 sudo[235039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dauvharqhjifobdenrokjzrwdgyetaah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168576.7531404-64-108247968232062/AnsiballZ_file.py'
Jan 23 11:42:57 compute-0 sudo[235039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:57 compute-0 python3.9[235041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:57 compute-0 sudo[235039]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:58 compute-0 sudo[235191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajmdcbxedhuzhuavmxfxxiyaozppvgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168577.7355282-72-209425944551223/AnsiballZ_stat.py'
Jan 23 11:42:58 compute-0 sudo[235191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:58 compute-0 python3.9[235193]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 11:42:58 compute-0 sudo[235191]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:58 compute-0 sudo[235314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qocsmqpovrxqvqcodsestvxsjwgfembf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168577.7355282-72-209425944551223/AnsiballZ_copy.py'
Jan 23 11:42:58 compute-0 sudo[235314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:59 compute-0 python3.9[235316]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769168577.7355282-72-209425944551223/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 11:42:59 compute-0 sudo[235314]: pam_unix(sudo:session): session closed for user root
Jan 23 11:42:59 compute-0 sudo[235466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzgvhvvqtvqdlkamionqllrcbkcdaecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769168579.2787762-87-131846041500834/AnsiballZ_systemd.py'
Jan 23 11:42:59 compute-0 sudo[235466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:42:59 compute-0 podman[201022]: time="2026-01-23T11:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:42:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:42:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3845 "" "Go-http-client/1.1"
Jan 23 11:42:59 compute-0 python3.9[235468]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 11:42:59 compute-0 systemd[1]: Stopping System Logging Service...
Jan 23 11:43:00 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 23 11:43:00 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 23 11:43:00 compute-0 systemd[1]: Stopped System Logging Service.
Jan 23 11:43:00 compute-0 systemd[1]: rsyslog.service: Consumed 3.622s CPU time, 9.0M memory peak, read 0B from disk, written 5.9M to disk.
Jan 23 11:43:00 compute-0 systemd[1]: Starting System Logging Service...
Jan 23 11:43:00 compute-0 rsyslogd[235472]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="235472" x-info="https://www.rsyslog.com"] start
Jan 23 11:43:00 compute-0 systemd[1]: Started System Logging Service.
Jan 23 11:43:00 compute-0 rsyslogd[235472]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 11:43:00 compute-0 rsyslogd[235472]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 23 11:43:00 compute-0 rsyslogd[235472]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 23 11:43:00 compute-0 rsyslogd[235472]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 23 11:43:00 compute-0 sudo[235466]: pam_unix(sudo:session): session closed for user root
Jan 23 11:43:00 compute-0 rsyslogd[235472]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 23 11:43:00 compute-0 sshd-session[234020]: Connection closed by 192.168.122.30 port 44098
Jan 23 11:43:00 compute-0 sshd-session[234017]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:43:00 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 11:43:00 compute-0 systemd[1]: session-28.scope: Consumed 9.598s CPU time.
Jan 23 11:43:00 compute-0 systemd-logind[798]: Session 28 logged out. Waiting for processes to exit.
Jan 23 11:43:00 compute-0 systemd-logind[798]: Removed session 28.
Jan 23 11:43:01 compute-0 nova_compute[185173]: 2026-01-23 11:43:01.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:01 compute-0 nova_compute[185173]: 2026-01-23 11:43:01.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 11:43:01 compute-0 nova_compute[185173]: 2026-01-23 11:43:01.260 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 11:43:01 compute-0 nova_compute[185173]: 2026-01-23 11:43:01.261 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:01 compute-0 nova_compute[185173]: 2026-01-23 11:43:01.261 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 11:43:01 compute-0 nova_compute[185173]: 2026-01-23 11:43:01.278 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:01 compute-0 openstack_network_exporter[204160]: ERROR   11:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:43:01 compute-0 openstack_network_exporter[204160]: ERROR   11:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.449 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.450 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.450 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:43:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:43:02 compute-0 nova_compute[185173]: 2026-01-23 11:43:02.293 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.269 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.270 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.271 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.271 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.600 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.601 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5725MB free_disk=72.47686386108398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.601 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.601 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.709 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.710 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:43:03 compute-0 podman[235502]: 2026-01-23 11:43:03.756115709 +0000 UTC m=+0.081104104 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.759 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.775 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.776 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.789 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.815 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.840 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.857 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.860 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:43:03 compute-0 nova_compute[185173]: 2026-01-23 11:43:03.861 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:43:04 compute-0 nova_compute[185173]: 2026-01-23 11:43:04.860 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:04 compute-0 nova_compute[185173]: 2026-01-23 11:43:04.861 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:43:04 compute-0 nova_compute[185173]: 2026-01-23 11:43:04.861 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:43:04 compute-0 nova_compute[185173]: 2026-01-23 11:43:04.877 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:43:04 compute-0 nova_compute[185173]: 2026-01-23 11:43:04.878 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:05 compute-0 nova_compute[185173]: 2026-01-23 11:43:05.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:05 compute-0 nova_compute[185173]: 2026-01-23 11:43:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:05 compute-0 podman[235521]: 2026-01-23 11:43:05.735260205 +0000 UTC m=+0.069068246 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:43:06 compute-0 nova_compute[185173]: 2026-01-23 11:43:06.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:43:06 compute-0 nova_compute[185173]: 2026-01-23 11:43:06.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:43:06 compute-0 podman[235543]: 2026-01-23 11:43:06.733716688 +0000 UTC m=+0.061502552 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 11:43:09 compute-0 podman[235561]: 2026-01-23 11:43:09.790500833 +0000 UTC m=+0.123181339 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 11:43:14 compute-0 podman[235586]: 2026-01-23 11:43:14.731917334 +0000 UTC m=+0.073012457 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 11:43:16 compute-0 podman[235605]: 2026-01-23 11:43:16.753329641 +0000 UTC m=+0.083364122 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, distribution-scope=public, managed_by=edpm_ansible, version=9.4, architecture=x86_64, release=1214.1726694543, release-0.7.12=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:43:20 compute-0 podman[235625]: 2026-01-23 11:43:20.713799746 +0000 UTC m=+0.051706032 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:43:25 compute-0 podman[235649]: 2026-01-23 11:43:25.776664334 +0000 UTC m=+0.102077520 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, release=1755695350)
Jan 23 11:43:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:43:29.084 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:43:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:43:29.084 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:43:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:43:29.084 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:43:29 compute-0 sshd-session[235669]: Accepted publickey for zuul from 38.102.83.196 port 36292 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 11:43:29 compute-0 systemd-logind[798]: New session 29 of user zuul.
Jan 23 11:43:29 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 23 11:43:29 compute-0 sshd-session[235669]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 11:43:29 compute-0 podman[201022]: time="2026-01-23T11:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:43:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:43:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3848 "" "Go-http-client/1.1"
Jan 23 11:43:30 compute-0 python3[235846]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:43:31 compute-0 openstack_network_exporter[204160]: ERROR   11:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:43:31 compute-0 openstack_network_exporter[204160]: ERROR   11:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:43:32 compute-0 sudo[236067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwpxuydypflvoyrahsdddgvizvfxvijr ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168611.7487462-36876-216295193712755/AnsiballZ_command.py'
Jan 23 11:43:32 compute-0 sudo[236067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:43:32 compute-0 python3[236069]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:43:32 compute-0 sudo[236067]: pam_unix(sudo:session): session closed for user root
Jan 23 11:43:33 compute-0 sudo[236220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seonpdchkoceusxitngfapaqxydbzsrb ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168612.773761-36887-21209958745747/AnsiballZ_command.py'
Jan 23 11:43:33 compute-0 sudo[236220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:43:33 compute-0 python3[236222]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:43:34 compute-0 podman[236225]: 2026-01-23 11:43:34.748240715 +0000 UTC m=+0.075131791 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 23 11:43:34 compute-0 sudo[236220]: pam_unix(sudo:session): session closed for user root
Jan 23 11:43:35 compute-0 podman[236367]: 2026-01-23 11:43:35.90952913 +0000 UTC m=+0.062782726 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:43:36 compute-0 python3[236413]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 11:43:36 compute-0 sudo[236584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsnvqkjehlerqxxlfccfbsesmowfapy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168616.5716257-36931-197081063296719/AnsiballZ_setup.py'
Jan 23 11:43:36 compute-0 sudo[236584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:43:36 compute-0 podman[236541]: 2026-01-23 11:43:36.966686906 +0000 UTC m=+0.096435546 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:43:37 compute-0 python3[236588]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 11:43:38 compute-0 sudo[236584]: pam_unix(sudo:session): session closed for user root
Jan 23 11:43:39 compute-0 sudo[236811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzlqhfalpbbpwdanqcohzhdevpbbzidc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168618.8533692-36961-224778849070058/AnsiballZ_command.py'
Jan 23 11:43:39 compute-0 sudo[236811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:43:39 compute-0 python3[236813]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:43:39 compute-0 sudo[236811]: pam_unix(sudo:session): session closed for user root
Jan 23 11:43:40 compute-0 sudo[236990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnelhfdmcgezqslsufxjbbflwxaadnaj ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769168619.9389002-36978-186085808690770/AnsiballZ_command.py'
Jan 23 11:43:40 compute-0 sudo[236990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 11:43:40 compute-0 podman[236952]: 2026-01-23 11:43:40.43158156 +0000 UTC m=+0.128937956 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:43:40 compute-0 python3[236996]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 11:43:40 compute-0 sudo[236990]: pam_unix(sudo:session): session closed for user root
Jan 23 11:43:45 compute-0 podman[237044]: 2026-01-23 11:43:45.73095104 +0000 UTC m=+0.066400428 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:43:47 compute-0 podman[237064]: 2026-01-23 11:43:47.748129657 +0000 UTC m=+0.077269495 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release-0.7.12=, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Jan 23 11:43:51 compute-0 podman[237084]: 2026-01-23 11:43:51.72372222 +0000 UTC m=+0.058879565 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:43:56 compute-0 podman[237108]: 2026-01-23 11:43:56.736944667 +0000 UTC m=+0.071581300 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 23 11:43:59 compute-0 podman[201022]: time="2026-01-23T11:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:43:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:43:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3850 "" "Go-http-client/1.1"
Jan 23 11:44:01 compute-0 openstack_network_exporter[204160]: ERROR   11:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:44:01 compute-0 openstack_network_exporter[204160]: ERROR   11:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:44:02 compute-0 nova_compute[185173]: 2026-01-23 11:44:02.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:03 compute-0 nova_compute[185173]: 2026-01-23 11:44:03.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.279 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.280 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.281 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.282 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.628 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.629 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5738MB free_disk=72.47674179077148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.629 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.629 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.688 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.689 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.807 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.824 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.826 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:44:04 compute-0 nova_compute[185173]: 2026-01-23 11:44:04.826 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:44:05 compute-0 podman[237128]: 2026-01-23 11:44:05.753380071 +0000 UTC m=+0.082539680 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 23 11:44:05 compute-0 nova_compute[185173]: 2026-01-23 11:44:05.821 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:05 compute-0 nova_compute[185173]: 2026-01-23 11:44:05.822 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:05 compute-0 nova_compute[185173]: 2026-01-23 11:44:05.822 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.249 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.250 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.251 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:06 compute-0 nova_compute[185173]: 2026-01-23 11:44:06.251 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:44:06 compute-0 podman[237147]: 2026-01-23 11:44:06.774868973 +0000 UTC m=+0.108469410 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 11:44:07 compute-0 nova_compute[185173]: 2026-01-23 11:44:07.246 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:44:07 compute-0 podman[237170]: 2026-01-23 11:44:07.722853014 +0000 UTC m=+0.053561344 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 11:44:10 compute-0 podman[237189]: 2026-01-23 11:44:10.780405611 +0000 UTC m=+0.111581738 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:44:16 compute-0 podman[237216]: 2026-01-23 11:44:16.751742129 +0000 UTC m=+0.077890203 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:44:18 compute-0 podman[237236]: 2026-01-23 11:44:18.787765125 +0000 UTC m=+0.112654527 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 11:44:22 compute-0 podman[237256]: 2026-01-23 11:44:22.728351393 +0000 UTC m=+0.064338677 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:44:27 compute-0 podman[237280]: 2026-01-23 11:44:27.756575906 +0000 UTC m=+0.077867942 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Jan 23 11:44:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:44:29.085 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:44:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:44:29.085 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:44:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:44:29.086 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:44:29 compute-0 podman[201022]: time="2026-01-23T11:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:44:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:44:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3849 "" "Go-http-client/1.1"
Jan 23 11:44:31 compute-0 openstack_network_exporter[204160]: ERROR   11:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:44:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 11:44:31 compute-0 openstack_network_exporter[204160]: ERROR   11:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:44:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 11:44:36 compute-0 podman[237301]: 2026-01-23 11:44:36.786359294 +0000 UTC m=+0.115438208 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 11:44:36 compute-0 podman[237319]: 2026-01-23 11:44:36.92147241 +0000 UTC m=+0.097813209 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:44:38 compute-0 podman[237342]: 2026-01-23 11:44:38.75003446 +0000 UTC m=+0.083014273 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 11:44:40 compute-0 sshd-session[235672]: Received disconnect from 38.102.83.196 port 36292:11: disconnected by user
Jan 23 11:44:40 compute-0 sshd-session[235672]: Disconnected from user zuul 38.102.83.196 port 36292
Jan 23 11:44:40 compute-0 sshd-session[235669]: pam_unix(sshd:session): session closed for user zuul
Jan 23 11:44:40 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 11:44:40 compute-0 systemd[1]: session-29.scope: Consumed 8.662s CPU time.
Jan 23 11:44:40 compute-0 systemd-logind[798]: Session 29 logged out. Waiting for processes to exit.
Jan 23 11:44:40 compute-0 systemd-logind[798]: Removed session 29.
Jan 23 11:44:41 compute-0 podman[237361]: 2026-01-23 11:44:41.813680693 +0000 UTC m=+0.145649766 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 11:44:47 compute-0 podman[237387]: 2026-01-23 11:44:47.774356211 +0000 UTC m=+0.099691236 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:44:49 compute-0 podman[237409]: 2026-01-23 11:44:49.772219517 +0000 UTC m=+0.097723146 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, config_id=kepler, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=)
Jan 23 11:44:53 compute-0 podman[237428]: 2026-01-23 11:44:53.756250508 +0000 UTC m=+0.081838532 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:44:58 compute-0 podman[237452]: 2026-01-23 11:44:58.752057725 +0000 UTC m=+0.080897619 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 11:44:59 compute-0 podman[201022]: time="2026-01-23T11:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:44:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:44:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3850 "" "Go-http-client/1.1"
Jan 23 11:45:01 compute-0 openstack_network_exporter[204160]: ERROR   11:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:45:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 11:45:01 compute-0 openstack_network_exporter[204160]: ERROR   11:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:45:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.450 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.451 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.453 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be295e0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.456 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.459 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:45:01.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:45:04 compute-0 nova_compute[185173]: 2026-01-23 11:45:04.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.277 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.278 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.278 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.278 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.600 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.601 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5745MB free_disk=72.47674179077148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.601 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.602 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.675 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.675 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.715 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.729 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.730 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:45:05 compute-0 nova_compute[185173]: 2026-01-23 11:45:05.731 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:45:06 compute-0 nova_compute[185173]: 2026-01-23 11:45:06.727 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:06 compute-0 nova_compute[185173]: 2026-01-23 11:45:06.727 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:07 compute-0 nova_compute[185173]: 2026-01-23 11:45:07.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:07 compute-0 podman[237473]: 2026-01-23 11:45:07.75097799 +0000 UTC m=+0.076201278 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:45:07 compute-0 podman[237474]: 2026-01-23 11:45:07.755921484 +0000 UTC m=+0.070283770 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:45:08 compute-0 nova_compute[185173]: 2026-01-23 11:45:08.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:08 compute-0 nova_compute[185173]: 2026-01-23 11:45:08.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:45:08 compute-0 nova_compute[185173]: 2026-01-23 11:45:08.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:45:08 compute-0 nova_compute[185173]: 2026-01-23 11:45:08.258 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:45:08 compute-0 nova_compute[185173]: 2026-01-23 11:45:08.259 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:45:08 compute-0 nova_compute[185173]: 2026-01-23 11:45:08.259 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:45:09 compute-0 podman[237515]: 2026-01-23 11:45:09.753549932 +0000 UTC m=+0.086041463 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 11:45:12 compute-0 podman[237533]: 2026-01-23 11:45:12.790470163 +0000 UTC m=+0.119424619 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 23 11:45:18 compute-0 podman[237558]: 2026-01-23 11:45:18.760814217 +0000 UTC m=+0.097846899 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 11:45:20 compute-0 podman[237579]: 2026-01-23 11:45:20.752871258 +0000 UTC m=+0.084525136 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=kepler, io.buildah.version=1.29.0, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:45:24 compute-0 podman[237599]: 2026-01-23 11:45:24.728533385 +0000 UTC m=+0.055881799 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:45:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:45:29.087 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:45:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:45:29.087 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:45:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:45:29.087 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:45:29 compute-0 podman[201022]: time="2026-01-23T11:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:45:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:45:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3842 "" "Go-http-client/1.1"
Jan 23 11:45:29 compute-0 podman[237625]: 2026-01-23 11:45:29.76762895 +0000 UTC m=+0.094846944 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:45:31 compute-0 openstack_network_exporter[204160]: ERROR   11:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:45:31 compute-0 openstack_network_exporter[204160]: ERROR   11:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:45:38 compute-0 podman[237646]: 2026-01-23 11:45:38.732413602 +0000 UTC m=+0.056045553 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:45:38 compute-0 podman[237647]: 2026-01-23 11:45:38.781343196 +0000 UTC m=+0.100547536 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2)
Jan 23 11:45:40 compute-0 podman[237687]: 2026-01-23 11:45:40.740301168 +0000 UTC m=+0.072277720 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:45:43 compute-0 podman[237707]: 2026-01-23 11:45:43.763312361 +0000 UTC m=+0.098707840 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 11:45:47 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:45:47.789 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:45:47 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:45:47.790 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 11:45:47 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:45:47.790 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:45:49 compute-0 podman[237733]: 2026-01-23 11:45:49.751768107 +0000 UTC m=+0.087367246 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 11:45:51 compute-0 podman[237753]: 2026-01-23 11:45:51.772563396 +0000 UTC m=+0.103968982 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=kepler, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9)
Jan 23 11:45:55 compute-0 podman[237771]: 2026-01-23 11:45:55.723706171 +0000 UTC m=+0.059057709 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:45:59 compute-0 podman[201022]: time="2026-01-23T11:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:45:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:45:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Jan 23 11:46:00 compute-0 podman[237795]: 2026-01-23 11:46:00.727910672 +0000 UTC m=+0.062661448 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Jan 23 11:46:01 compute-0 openstack_network_exporter[204160]: ERROR   11:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:46:01 compute-0 openstack_network_exporter[204160]: ERROR   11:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:46:05 compute-0 nova_compute[185173]: 2026-01-23 11:46:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:05 compute-0 nova_compute[185173]: 2026-01-23 11:46:05.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:05 compute-0 nova_compute[185173]: 2026-01-23 11:46:05.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.272 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.272 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.272 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.272 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.615 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.616 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5741MB free_disk=72.47676086425781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.617 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.618 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.679 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.680 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.703 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.716 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.717 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:46:06 compute-0 nova_compute[185173]: 2026-01-23 11:46:06.717 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:07 compute-0 nova_compute[185173]: 2026-01-23 11:46:07.713 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:08 compute-0 nova_compute[185173]: 2026-01-23 11:46:08.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.248 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.249 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.249 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:46:09 compute-0 nova_compute[185173]: 2026-01-23 11:46:09.249 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:46:09 compute-0 podman[237815]: 2026-01-23 11:46:09.74405845 +0000 UTC m=+0.080740671 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:46:09 compute-0 podman[237816]: 2026-01-23 11:46:09.75446828 +0000 UTC m=+0.087361416 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 11:46:11 compute-0 podman[237856]: 2026-01-23 11:46:11.727496123 +0000 UTC m=+0.058595565 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 11:46:14 compute-0 podman[237876]: 2026-01-23 11:46:14.76683327 +0000 UTC m=+0.100886846 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:46:20 compute-0 podman[237903]: 2026-01-23 11:46:20.772685281 +0000 UTC m=+0.086206316 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 11:46:22 compute-0 podman[237924]: 2026-01-23 11:46:22.737306825 +0000 UTC m=+0.071331208 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, version=9.4, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Jan 23 11:46:26 compute-0 podman[237944]: 2026-01-23 11:46:26.722659264 +0000 UTC m=+0.059413064 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:46:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:46:29.089 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:46:29.091 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:46:29.091 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:29 compute-0 podman[201022]: time="2026-01-23T11:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:46:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:46:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3847 "" "Go-http-client/1.1"
Jan 23 11:46:31 compute-0 openstack_network_exporter[204160]: ERROR   11:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:46:31 compute-0 openstack_network_exporter[204160]: ERROR   11:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:46:31 compute-0 podman[237968]: 2026-01-23 11:46:31.764056779 +0000 UTC m=+0.096463948 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Jan 23 11:46:40 compute-0 podman[237989]: 2026-01-23 11:46:40.758054277 +0000 UTC m=+0.072724043 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute)
Jan 23 11:46:40 compute-0 podman[237988]: 2026-01-23 11:46:40.77155178 +0000 UTC m=+0.105934132 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:46:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:46:41.530 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:46:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:46:41.530 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 11:46:42 compute-0 podman[238031]: 2026-01-23 11:46:42.764172413 +0000 UTC m=+0.094044159 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:46:45 compute-0 podman[238050]: 2026-01-23 11:46:45.764245233 +0000 UTC m=+0.096361216 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:46:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:46:46.532 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
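
That transaction is ovsdbapp's db_set building the DbSetCommand shown and committing it in a single txn, which writes nb_cfg=3 back into the chassis's external_ids as the metadata agent's acknowledgement. A sketch with the record UUID and value copied from the log, assuming sb_idl is a connected ovsdbapp southbound API object (the if_exists flag appears in the command repr above; whether db_set accepts it as a keyword depends on the ovsdbapp version):

    # sb_idl: assumed connected ovsdbapp southbound API object.
    sb_idl.db_set(
        'Chassis_Private',
        '9a136bfd-345f-428f-a7f6-d55531120214',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),
    ).execute(check_error=True)
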
Jan 23 11:46:51 compute-0 podman[238077]: 2026-01-23 11:46:51.794189037 +0000 UTC m=+0.110273897 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 11:46:53 compute-0 podman[238096]: 2026-01-23 11:46:53.808842774 +0000 UTC m=+0.128148830 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, release-0.7.12=, name=ubi9, version=9.4, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.205 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.206 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.239 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.370 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.371 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
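
The Acquiring/acquired pair above (and the matching "released" line further down) is oslo.concurrency's named-lock plumbing: instance_claim runs under the "compute_resources" semaphore so only one claim can mutate the resource tracker at a time. The generic decorator form, as a sketch (nova wraps this in its own utility rather than using the bare decorator):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim(tracker, instance):
        # ...account CPU/RAM/disk for the new instance under the lock...
        pass
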
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.385 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.385 185177 INFO nova.compute.claims [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Claim successful on node compute-0.ctlplane.example.com
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.585 185177 DEBUG nova.compute.provider_tree [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.598 185177 DEBUG nova.scheduler.client.report [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
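
The inventory dict maps directly to placement capacity: the effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values above:

    # capacity = (total - reserved) * allocation_ratio
    vcpu = (8 - 0) * 4.0        # 32.0 schedulable vCPUs (4x overcommit)
    ram  = (7679 - 512) * 1.0   # 7167 MB available to guests
    disk = (79 - 0) * 0.9       # 71.1 GB placement will allocate
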
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.621 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.622 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.670 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.671 185177 DEBUG nova.network.neutron [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.695 185177 INFO nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.728 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 11:46:54 compute-0 nova_compute[185173]: 2026-01-23 11:46:54.998 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.000 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.001 185177 INFO nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Creating image(s)
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.002 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.003 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.004 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.004 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "80c014b261205a8ef2db68f438805c389e810b13" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.005 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.598 185177 WARNING oslo_policy.policy [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 23 11:46:55 compute-0 nova_compute[185173]: 2026-01-23 11:46:55.599 185177 WARNING oslo_policy.policy [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.287 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.341 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.part --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
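
The wrapped command line above is how oslo.concurrency applies resource limits: when processutils.execute is given a prlimit, it re-execs the child through python -m oslo_concurrency.prlimit, which sets the address-space and CPU-time rlimits before exec'ing qemu-img, so a malformed image cannot make the prober chew unbounded memory or CPU. A sketch of the call that produces this command line, path copied from the log:

    from oslo_concurrency import processutils

    # --as=1073741824 caps address space at 1 GiB; --cpu=30 caps CPU seconds.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        '80c014b261205a8ef2db68f438805c389e810b13.part',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
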
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.343 185177 DEBUG nova.virt.images [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] c5833e41-b4db-454e-8f49-014aa18c7dc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.344 185177 DEBUG nova.privsep.utils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.345 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.part /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.572 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.part /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.converted" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.576 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.634 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13.converted --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.635 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.649 185177 INFO oslo.privsep.daemon [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpra07hguq/privsep.sock']
Jan 23 11:46:56 compute-0 nova_compute[185173]: 2026-01-23 11:46:56.774 185177 DEBUG nova.network.neutron [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Successfully created port: 4c18896b-ecf0-4d1b-b901-f24edce45c11 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.314 185177 INFO oslo.privsep.daemon [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Spawned new privsep daemon via rootwrap
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.184 238129 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.188 238129 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.191 238129 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.191 238129 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238129
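
Those five lines are oslo.privsep bootstrapping: the first privileged call hits nova's sys_admin context, which forks a root helper via sudo/nova-rootwrap (the "Running privsep helper" line) and then drops to the capability set printed (eff/prm = CAP_CHOWN|...|CAP_SYS_ADMIN, inheritable none). A sketch of how such a context is declared and used; the capability list mirrors what the daemon printed:

    import os
    from oslo_privsep import capabilities, priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_DAC_READ_SEARCH,
                      capabilities.CAP_FOWNER,
                      capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def chown(path, uid, gid):
        # Body executes inside the root privsep daemon, not the caller.
        os.chown(path, uid, gid)
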
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.397 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.450 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.451 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "80c014b261205a8ef2db68f438805c389e810b13" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.452 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.463 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.522 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.524 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.561 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.562 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
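
The qemu-img create two lines up is the copy-on-write step: the instance's disk is a qcow2 overlay whose backing file is the shared raw base image under _base/, so every instance booted from the same image shares one read-only base and writes only its deltas. A sketch of the equivalent call, paths and size copied from the log:

    from oslo_concurrency import processutils

    base = ('/var/lib/nova/instances/_base/'
            '80c014b261205a8ef2db68f438805c389e810b13')
    disk = ('/var/lib/nova/instances/'
            '55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk')
    # 1073741824 bytes: the flavor's 1 GiB root disk.
    processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'create', '-f', 'qcow2',
        '-o', 'backing_file=%s,backing_fmt=raw' % base, disk, '1073741824')
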
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.562 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.620 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.621 185177 DEBUG nova.virt.disk.api [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Checking if we can resize image /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.622 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.699 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.700 185177 DEBUG nova.virt.disk.api [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Cannot resize image /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.700 185177 DEBUG nova.objects.instance [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'migration_context' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:46:57 compute-0 podman[238144]: 2026-01-23 11:46:57.720169459 +0000 UTC m=+0.052159176 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.720 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.720 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.721 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.721 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.721 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.722 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.744 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.745 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.882 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.883 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
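
The ephemeral cache fill logged above follows the same pattern as the root disk: a 1 GiB raw file is created and formatted vfat with label ephemeral0 once, cached under _base/ so later instances needing the same ephemeral disk reuse it, and then (next lines) wrapped in a per-instance qcow2 overlay. Equivalent calls as a sketch:

    from oslo_concurrency import processutils

    eph = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
    processutils.execute('env', 'LC_ALL=C', 'LANG=C',
                         'qemu-img', 'create', '-f', 'raw', eph, '1G')
    processutils.execute('mkfs', '-t', 'vfat', '-n', 'ephemeral0', eph)
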
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.895 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.978 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.979 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.980 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:57 compute-0 nova_compute[185173]: 2026-01-23 11:46:57.991 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.042 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.043 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.158 185177 DEBUG nova.network.neutron [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Successfully updated port: 4c18896b-ecf0-4d1b-b901-f24edce45c11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.160 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 1073741824" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.160 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.161 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.183 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.183 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.183 185177 DEBUG nova.network.neutron [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.219 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.220 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.220 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Ensure instance console log exists: /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.221 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.221 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.221 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.625 185177 DEBUG nova.network.neutron [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.790 185177 DEBUG nova.compute.manager [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-changed-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.791 185177 DEBUG nova.compute.manager [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Refreshing instance network info cache due to event network-changed-4c18896b-ecf0-4d1b-b901-f24edce45c11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 11:46:58 compute-0 nova_compute[185173]: 2026-01-23 11:46:58.791 185177 DEBUG oslo_concurrency.lockutils [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.564 185177 DEBUG nova.network.neutron [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.590 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.591 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Instance network_info: |[{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.592 185177 DEBUG oslo_concurrency.lockutils [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.592 185177 DEBUG nova.network.neutron [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Refreshing network info cache for port 4c18896b-ecf0-4d1b-b901-f24edce45c11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
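[editor's note] The lockutils lines above show two request contexts serializing on the same per-instance cache key. A minimal sketch of that pattern with the same oslo.concurrency primitive — the key format "refresh_cache-<uuid>" and the instance UUID are taken from the log; the fetch function and cache dict are purely illustrative:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "55846fbf-a87a-4cba-be0b-23125d3d9ef4"  # instance from the log

    def refresh_network_cache(instance_uuid, fetch_nw_info, cache):
        # lockutils.lock() is a context manager; holding the per-instance
        # lock is what serializes the interleaved cache refreshes above.
        with lockutils.lock("refresh_cache-%s" % instance_uuid):
            cache[instance_uuid] = fetch_nw_info(instance_uuid)

    cache = {}
    refresh_network_cache(INSTANCE_UUID, lambda uuid: {"ports": []}, cache)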
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.599 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Start _get_guest_xml network_info=[{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.610 185177 WARNING nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.623 185177 DEBUG nova.virt.libvirt.host [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.624 185177 DEBUG nova.virt.libvirt.host [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.631 185177 DEBUG nova.virt.libvirt.host [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.632 185177 DEBUG nova.virt.libvirt.host [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
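[editor's note] The two probes above come up empty on cgroups v1 and succeed on cgroups v2. A hedged analogue of the v2 check, assuming only the standard unified-hierarchy layout (the real helper lives in nova/virt/libvirt/host.py, so treat this as a simplification):

    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        # On a cgroups-v2 host the unified hierarchy lists every available
        # controller in one space-separated file; "cpu" present corresponds
        # to the "CPU controller found on host" line above.
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy -> not a cgroups-v2 host

    print(has_cgroupsv2_cpu_controller())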
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.633 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.633 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T11:45:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='f2c5c5dd-a580-4885-a3ab-a766eac401c8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.633 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.634 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.634 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.634 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.634 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.635 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.635 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.635 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.636 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.636 185177 DEBUG nova.virt.hardware [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
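[editor's note] The topology lines reduce to a small search: enumerate every sockets*cores*threads factorization of the vCPU count that fits the 65536-per-dimension limits, then sort by preference. A simplified reimplementation that only mirrors the numbers in the log (nova's actual algorithm is in nova/virt/hardware.py):

    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, maxima):
        # Every sockets*cores*threads factorization of the vCPU count
        # that respects the per-dimension maxima (65536 each in the log).
        out = []
        for s in range(1, min(vcpus, maxima.sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, maxima.cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= maxima.threads:
                    out.append(VirtCPUTopology(s, c, t))
        return out

    maxima = VirtCPUTopology(65536, 65536, 65536)
    # For 1 vCPU this yields the single topology the log reports:
    print(possible_topologies(1, maxima))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]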
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.641 185177 DEBUG nova.privsep.utils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
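[editor's note] supports_direct_io() answers one question: can files under /var/lib/nova/instances be opened with O_DIRECT (which is what justifies cache="none" in the disk XML further down)? A Linux-only sketch of such a probe, assuming a 4096-byte page size; nova's real helper likewise uses an mmap'd buffer to satisfy O_DIRECT's alignment rules:

    import mmap
    import os

    def supports_direct_io(dirpath):
        testfile = os.path.join(dirpath, ".directio.test")
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        except OSError:
            return False
        try:
            # O_DIRECT needs an aligned buffer and an aligned length;
            # an anonymous mmap is page-aligned by construction.
            buf = mmap.mmap(-1, 4096)
            buf.write(b"\0" * 4096)
            os.write(fd, buf)
            return True
        except OSError:
            return False  # e.g. filesystems that reject O_DIRECT
        finally:
            os.close(fd)
            os.unlink(testfile)

    print(supports_direct_io("/var/lib/nova/instances"))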
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.642 185177 DEBUG nova.virt.libvirt.vif [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:46:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-wixocgu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:46:54Z,user_data=None,user_id='d9858533c2284846a8f0f19a1fb45045',uuid=55846fbf-a87a-4cba-be0b-23125d3d9ef4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.642 185177 DEBUG nova.network.os_vif_util [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.643 185177 DEBUG nova.network.os_vif_util [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.645 185177 DEBUG nova.objects.instance [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.664 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] End _get_guest_xml xml=<domain type="kvm">
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <uuid>55846fbf-a87a-4cba-be0b-23125d3d9ef4</uuid>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <name>instance-00000001</name>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <memory>524288</memory>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <metadata>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:name>test_0</nova:name>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 11:46:59</nova:creationTime>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:flavor name="m1.small">
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:memory>512</nova:memory>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:ephemeral>1</nova:ephemeral>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:user uuid="d9858533c2284846a8f0f19a1fb45045">admin</nova:user>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:project uuid="bd16a0de2f5e4a8480a855ef0e1a3f14">admin</nova:project>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="c5833e41-b4db-454e-8f49-014aa18c7dc5"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <nova:ports>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         <nova:port uuid="4c18896b-ecf0-4d1b-b901-f24edce45c11">
Jan 23 11:46:59 compute-0 nova_compute[185173]:           <nova:ip type="fixed" address="192.168.0.65" ipVersion="4"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:         </nova:port>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       </nova:ports>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </metadata>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <system>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <entry name="serial">55846fbf-a87a-4cba-be0b-23125d3d9ef4</entry>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <entry name="uuid">55846fbf-a87a-4cba-be0b-23125d3d9ef4</entry>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </system>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <os>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </os>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <features>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <apic/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </features>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </clock>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <target dev="vdb" bus="virtio"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.config"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <interface type="ethernet">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <mac address="fa:16:3e:e4:21:a1"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <mtu size="1442"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <target dev="tap4c18896b-ec"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/console.log" append="off"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </serial>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <video>
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </video>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 11:46:59 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 11:46:59 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 11:46:59 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:46:59 compute-0 nova_compute[185173]: </domain>
Jan 23 11:46:59 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
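[editor's note] The XML document above is what nova hands to libvirt next. A minimal sketch of that handoff with the libvirt-python bindings, assuming the XML was saved to domain.xml; nova itself goes through its Guest wrapper and flag-bearing variants of these calls, so this is not the exact call path:

    import libvirt  # libvirt-python bindings

    with open("domain.xml") as f:   # e.g. the <domain> document above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # and boot instance-00000001
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()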
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.665 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Preparing to wait for external event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.665 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.665 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.666 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
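[editor's note] prepare_for_instance_event registers interest in network-vif-plugged *before* the VIF is plugged, so a fast notification cannot be missed. A threading-based sketch of that ordering (nova uses its own InstanceEvents bookkeeping and eventlet rather than these exact primitives):

    import threading

    _events, _events_lock = {}, threading.Lock()

    def prepare_for_instance_event(instance_uuid, name, tag):
        # Register first, mirroring "Preparing to wait for external event".
        with _events_lock:
            return _events.setdefault((instance_uuid, name, tag),
                                      threading.Event())

    def deliver_event(instance_uuid, name, tag):
        # Called when Neutron reports the event back to the compute service.
        with _events_lock:
            ev = _events.pop((instance_uuid, name, tag), None)
        if ev:
            ev.set()

    key = ("55846fbf-a87a-4cba-be0b-23125d3d9ef4",
           "network-vif-plugged", "4c18896b-ecf0-4d1b-b901-f24edce45c11")
    ev = prepare_for_instance_event(*key)   # ...plug the VIF here...
    deliver_event(*key)
    assert ev.wait(timeout=300)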
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.666 185177 DEBUG nova.virt.libvirt.vif [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:46:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-wixocgu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:46:54Z,user_data=None,user_id='d9858533c2284846a8f0f19a1fb45045',uuid=55846fbf-a87a-4cba-be0b-23125d3d9ef4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.667 185177 DEBUG nova.network.os_vif_util [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.667 185177 DEBUG nova.network.os_vif_util [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.668 185177 DEBUG os_vif [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
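[editor's note] os-vif is a separate library with a small public surface: initialize() loads the plugins, plug() does the work. A hedged sketch of the call logged above, assuming the ovs plugin is installed and an ovsdb-server is reachable; field values are copied from the VIFOpenVSwitch object in the log, and exact constructor kwargs may vary across os-vif releases:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered os-vif plugins

    my_vif = vif.VIFOpenVSwitch(
        id="4c18896b-ecf0-4d1b-b901-f24edce45c11",
        address="fa:16:3e:e4:21:a1",
        vif_name="tap4c18896b-ec",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="4c18896b-ecf0-4d1b-b901-f24edce45c11"),
        network=network.Network(id="9d2c33ef-0f52-43b5-80dd-899657aece53",
                                bridge="br-int"),
    )
    info = instance_info.InstanceInfo(
        uuid="55846fbf-a87a-4cba-be0b-23125d3d9ef4", name="test_0")

    os_vif.plug(my_vif, info)  # the "Plugging vif ..." step above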
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.711 185177 DEBUG ovsdbapp.backend.ovs_idl [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.711 185177 DEBUG ovsdbapp.backend.ovs_idl [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.711 185177 DEBUG ovsdbapp.backend.ovs_idl [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.712 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.713 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.713 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.714 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.716 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.719 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.731 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.732 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.732 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
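[editor's note] The transaction above goes through ovsdbapp's native OVSDB interface rather than ovs-vsctl. A minimal sketch of the same idempotent add-br against the endpoint from the CONNECTING/ACTIVE lines, assuming a local ovsdb-server on tcp:127.0.0.1:6640 (connection setup details differ across ovsdbapp releases):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB_ENDPOINT = "tcp:127.0.0.1:6640"  # from the log above

    idl = connection.OvsdbIdl.from_server(OVSDB_ENDPOINT, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of AddBridgeCommand(name=br-int, may_exist=True,
    # datapath_type=system); re-running it "causes no change", as logged.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))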
Jan 23 11:46:59 compute-0 nova_compute[185173]: 2026-01-23 11:46:59.733 185177 INFO oslo.privsep.daemon [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpqujk5k5v/privsep.sock']
Jan 23 11:46:59 compute-0 podman[201022]: time="2026-01-23T11:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:46:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 11:46:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3857 "" "Go-http-client/1.1"
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.387 185177 INFO oslo.privsep.daemon [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Spawned new privsep daemon via rootwrap
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.281 238190 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.287 238190 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.291 238190 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.292 238190 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238190
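[editor's note] The privsep lines show the unprivileged nova-compute process spawning, via sudo and rootwrap, a helper daemon that then drops to exactly two capabilities. A sketch of how such a context is declared with oslo.privsep — the vif_plug_ovs project defines something very close to this, but the names and the example entrypoint here are illustrative:

    import subprocess

    from oslo_privsep import capabilities, priv_context

    # The capability set matches the daemon's own report above:
    # CAP_DAC_OVERRIDE|CAP_NET_ADMIN effective/permitted, none inheritable.
    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs",
        cfg_section="vif_plug_ovs_privileged",
        pypath=__name__ + ".vif_plug",  # must be importable when the daemon starts
        capabilities=[capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN],
    )

    @vif_plug.entrypoint
    def set_device_mtu(device, mtu):
        # Runs inside the privsep daemon (uid 0 plus the two capabilities),
        # not in the unprivileged nova-compute process itself.
        subprocess.run(["ip", "link", "set", device, "mtu", str(mtu)],
                       check=True)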
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.698 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.698 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c18896b-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.699 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c18896b-ec, col_values=(('external_ids', {'iface-id': '4c18896b-ecf0-4d1b-b901-f24edce45c11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:21:a1', 'vm-uuid': '55846fbf-a87a-4cba-be0b-23125d3d9ef4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.701 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:00 compute-0 NetworkManager[56133]: <info>  [1769168820.7031] manager: (tap4c18896b-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.704 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.710 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.711 185177 INFO os_vif [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec')
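[editor's note] The port is created with external_ids whose iface-id equals the Neutron port UUID; ovn-controller watches the local OVSDB for that key, binds the matching logical switch port, and Neutron then emits the network-vif-plugged event nova registered for above. Continuing the ovsdbapp sketch from earlier (reusing its `api` object), the same two commands as the 11:47:00 transaction:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap4c18896b-ec", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap4c18896b-ec",
            ("external_ids", {
                "iface-id": "4c18896b-ecf0-4d1b-b901-f24edce45c11",  # Neutron port UUID
                "iface-status": "active",
                "attached-mac": "fa:16:3e:e4:21:a1",
                "vm-uuid": "55846fbf-a87a-4cba-be0b-23125d3d9ef4",
            })))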
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.831 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.832 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.832 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.833 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No VIF found with MAC fa:16:3e:e4:21:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 11:47:00 compute-0 nova_compute[185173]: 2026-01-23 11:47:00.833 185177 INFO nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Using config drive
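[editor's note] "Using config drive" means nova packs the instance metadata into the disk.config ISO attached as the SATA cdrom in the XML above. A trimmed sketch of that packaging step, assuming genisoimage is installed; nova passes additional flags, but the config-2 volume label is the one cloud-init searches for:

    import pathlib
    import subprocess
    import tempfile

    def build_config_drive(metadata_dir, output_path):
        # ISO9660 with Joliet and Rock Ridge extensions, labelled "config-2".
        subprocess.run(
            ["genisoimage", "-o", output_path, "-quiet", "-J", "-r",
             "-V", "config-2", metadata_dir],
            check=True)

    src = tempfile.mkdtemp()
    pathlib.Path(src, "openstack", "latest").mkdir(parents=True)
    build_config_drive(src, "/tmp/disk.config")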
Jan 23 11:47:01 compute-0 nova_compute[185173]: 2026-01-23 11:47:01.387 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:01 compute-0 openstack_network_exporter[204160]: ERROR   11:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:47:01 compute-0 openstack_network_exporter[204160]: ERROR   11:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.450 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.451 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.451 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842f83470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
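The registration lines above are stevedore Extension objects being bound to a shared ThreadPoolExecutor. As a rough, hedged sketch of how such pollsters are typically loaded (the entry-point namespace here is an assumption for illustration; only the Extension and executor objects appear in the log):

    # Hedged sketch: discover pollster plugins via stevedore entry points,
    # then pair them with a thread-pool executor as the manager log describes.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed entry-point namespace
        invoke_on_load=True,
    )
    executor = ThreadPoolExecutor()
    for ext in mgr:
        print('would register pollster', ext.name, 'with executor', executor)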
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.456 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 23 11:47:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:01.816 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/55846fbf-a87a-4cba-be0b-23125d3d9ef4 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad70b57d9194f6532b182b578b16289681d355eb6a1afd27a70859dd1387cbc9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 23 11:47:02 compute-0 nova_compute[185173]: 2026-01-23 11:47:02.598 185177 DEBUG nova.network.neutron [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated VIF entry in instance network info cache for port 4c18896b-ecf0-4d1b-b901-f24edce45c11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 11:47:02 compute-0 nova_compute[185173]: 2026-01-23 11:47:02.599 185177 DEBUG nova.network.neutron [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:47:02 compute-0 nova_compute[185173]: 2026-01-23 11:47:02.616 185177 DEBUG oslo_concurrency.lockutils [req-4dca4da2-4f4c-4b1e-b166-992efcbfc040 req-cb42442f-8ec7-45ae-ac72-0b4c3a371698 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
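The network_info entry that nova-compute cached above is plain JSON. A minimal, self-contained sketch of pulling the port ID, MAC, and fixed IPs out of such a VIF entry, trimmed to the fields shown in the log:

    # Minimal sketch: extract port ID, MAC, and fixed IPs from a Nova
    # network_info VIF entry shaped like the one logged above.
    vif = {
        "id": "4c18896b-ecf0-4d1b-b901-f24edce45c11",
        "address": "fa:16:3e:e4:21:a1",
        "network": {
            "label": "private",
            "subnets": [
                {"cidr": "192.168.0.0/24",
                 "ips": [{"address": "192.168.0.65", "type": "fixed"}]},
            ],
        },
    }
    print("port", vif["id"], "mac", vif["address"])
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("  %s -> %s (%s)" % (subnet["cidr"], ip["address"], ip["type"]))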
Jan 23 11:47:02 compute-0 podman[238197]: 2026-01-23 11:47:02.741141888 +0000 UTC m=+0.073283157 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
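The podman line above is a periodic health check against the openstack_network_exporter container reporting health_status=healthy. A hedged equivalent, run by hand from Python (assumes podman is on the host and uses the container name from the log):

    # Hedged sketch: trigger the same container health check manually.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "openstack_network_exporter"],
        capture_output=True, text=True,
    )
    # podman exits 0 when the check passes, matching health_status=healthy.
    print("healthy" if result.returncode == 0 else "unhealthy", result.stdout)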
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.803 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1579 Content-Type: application/json Date: Fri, 23 Jan 2026 11:47:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9b9bc24e-5dd9-473e-bede-b26f7d1dc1fd x-openstack-request-id: req-9b9bc24e-5dd9-473e-bede-b26f7d1dc1fd _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.803 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "55846fbf-a87a-4cba-be0b-23125d3d9ef4", "name": "test_0", "status": "BUILD", "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "user_id": "d9858533c2284846a8f0f19a1fb45045", "metadata": {}, "hostId": "47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb", "image": {"id": "c5833e41-b4db-454e-8f49-014aa18c7dc5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c5833e41-b4db-454e-8f49-014aa18c7dc5"}]}, "flavor": {"id": "f2c5c5dd-a580-4885-a3ab-a766eac401c8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/f2c5c5dd-a580-4885-a3ab-a766eac401c8"}]}, "created": "2026-01-23T11:46:51Z", "updated": "2026-01-23T11:46:54Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/55846fbf-a87a-4cba-be0b-23125d3d9ef4"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/55846fbf-a87a-4cba-be0b-23125d3d9ef4"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.803 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/55846fbf-a87a-4cba-be0b-23125d3d9ef4 used request id req-9b9bc24e-5dd9-473e-bede-b26f7d1dc1fd request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
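The REQ/RESP pair above is a standard microversion-2.1 "show server" call. A hedged sketch of issuing the same GET with python-novaclient (the Keystone URL and credentials here are placeholders, not values taken from this log):

    # Hedged sketch: the same GET /v2.1/servers/<uuid> via python-novaclient.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder
        username="ceilometer", password="***",                       # placeholders
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    print(server.status, getattr(server, "OS-EXT-STS:task_state"))  # BUILD spawning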
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.805 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.806 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.806 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:47:02.806342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
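Every "was shut off while getting sample" message in this cycle reflects the same libvirt domain state check. A hedged sketch of performing it directly with the libvirt Python binding (the connection URI is assumed; the domain name comes from the log):

    # Hedged sketch: inspect the domain state that makes the pollsters skip
    # instance-00000001 (libvirt.VIR_DOMAIN_SHUTOFF == 5).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    dom = conn.lookupByName("instance-00000001")   # name from the log
    state, reason = dom.state()
    print("shut off" if state == libvirt.VIR_DOMAIN_SHUTOFF else "state %d" % state)
    conn.close()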
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.809 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:47:02.809665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.810 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.usage: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:47:02.811536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.812 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.813 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
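The ERROR above is the pollster blacklisting mechanism: the libvirt inspector has no data source for *.rate meters, so the pollster raises PollsterPermanentError and the manager stops polling that resource on this source instead of retrying every interval. A hedged, simplified sketch of the pattern (the real get_samples is a method with a fuller signature):

    # Hedged, simplified sketch of the PollsterPermanentError pattern seen above.
    from ceilometer.polling import plugin_base

    def get_samples(manager, cache, resources):
        # LibvirtInspector never provides rate data, so fail permanently for
        # every resource rather than logging the same failure each cycle.
        raise plugin_base.PollsterPermanentError(resources)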
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-23T11:47:02.813466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:47:02.815091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.816 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.817 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:47:02.816913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.819 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:47:02.818940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.820 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.820 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.821 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:47:02.821026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.822 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.822 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.822 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.823 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.823 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:47:02.823120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.824 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.825 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:47:02.825084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.826 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.826 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.827 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:47:02.827084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.allocation: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.828 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.829 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:47:02.828834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:47:02.829759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.830 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.831 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:47:02.831449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.832 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of memory.usage: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.833 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.833 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:47:02.833383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.834 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of power.state: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.835 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:47:02.835196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.836 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.836 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.837 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.837 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:47:02.837207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.838 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of cpu: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.838 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.839 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.839 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.839 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-23T11:47:02.839158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.840 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.840 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:47:02.840258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.841 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.842 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:47:02.842350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.843 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.843 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.844 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:47:02.844161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.845 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.845 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.845 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.846 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:47:02.846101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.847 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:47:02.848149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.849 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.849 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:47:02.850159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.851 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.852 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.852 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:47:02.852388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.853 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:47:02.854468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.855 14 DEBUG ceilometer.compute.pollsters [-] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance <name=instance-00000001, id=55846fbf-a87a-4cba-be0b-23125d3d9ef4>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:47:02.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.207 185177 INFO nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Creating config drive at /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.config
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.212 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmbhar0v5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.334 185177 DEBUG oslo_concurrency.processutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmbhar0v5" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:47:03 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 23 11:47:03 compute-0 NetworkManager[56133]: <info>  [1769168823.4154] manager: (tap4c18896b-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 23 11:47:03 compute-0 kernel: tap4c18896b-ec: entered promiscuous mode
Jan 23 11:47:03 compute-0 ovn_controller[97581]: 2026-01-23T11:47:03Z|00027|binding|INFO|Claiming lport 4c18896b-ecf0-4d1b-b901-f24edce45c11 for this chassis.
Jan 23 11:47:03 compute-0 ovn_controller[97581]: 2026-01-23T11:47:03Z|00028|binding|INFO|4c18896b-ecf0-4d1b-b901-f24edce45c11: Claiming fa:16:3e:e4:21:a1 192.168.0.65
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.418 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.422 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.437 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:21:a1 192.168.0.65'], port_security=['fa:16:3e:e4:21:a1 192.168.0.65'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.65/24', 'neutron:device_id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=4c18896b-ecf0-4d1b-b901-f24edce45c11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:47:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.440 106832 INFO neutron.agent.ovn.metadata.agent [-] Port 4c18896b-ecf0-4d1b-b901-f24edce45c11 in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 bound to our chassis
Jan 23 11:47:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.443 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 11:47:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.444 106832 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp3ep_lyt7/privsep.sock']
Jan 23 11:47:03 compute-0 systemd-udevd[238242]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:47:03 compute-0 NetworkManager[56133]: <info>  [1769168823.4728] device (tap4c18896b-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 11:47:03 compute-0 NetworkManager[56133]: <info>  [1769168823.4734] device (tap4c18896b-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 11:47:03 compute-0 systemd-machined[156550]: New machine qemu-1-instance-00000001.
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.491 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:03 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 23 11:47:03 compute-0 ovn_controller[97581]: 2026-01-23T11:47:03Z|00029|binding|INFO|Setting lport 4c18896b-ecf0-4d1b-b901-f24edce45c11 ovn-installed in OVS
Jan 23 11:47:03 compute-0 ovn_controller[97581]: 2026-01-23T11:47:03Z|00030|binding|INFO|Setting lport 4c18896b-ecf0-4d1b-b901-f24edce45c11 up in Southbound
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.501 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.794 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769168823.7941172, 55846fbf-a87a-4cba-be0b-23125d3d9ef4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.795 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] VM Started (Lifecycle Event)
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.845 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.851 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769168823.7942107, 55846fbf-a87a-4cba-be0b-23125d3d9ef4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.852 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] VM Paused (Lifecycle Event)
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.896 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.902 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:47:03 compute-0 nova_compute[185173]: 2026-01-23 11:47:03.921 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:04.080 106832 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:04.082 106832 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3ep_lyt7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.964 238267 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.968 238267 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.970 238267 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:03.970 238267 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238267
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:04.085 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[f4053c73-c927-4cf3-ae9a-0f52ff52b8f7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.253 185177 DEBUG nova.compute.manager [req-ba65cff6-d530-4ba6-ba76-40019edae2df req-d0df42c1-a0d7-432a-bba3-f7ac0487bf45 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.255 185177 DEBUG oslo_concurrency.lockutils [req-ba65cff6-d530-4ba6-ba76-40019edae2df req-d0df42c1-a0d7-432a-bba3-f7ac0487bf45 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.256 185177 DEBUG oslo_concurrency.lockutils [req-ba65cff6-d530-4ba6-ba76-40019edae2df req-d0df42c1-a0d7-432a-bba3-f7ac0487bf45 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.256 185177 DEBUG oslo_concurrency.lockutils [req-ba65cff6-d530-4ba6-ba76-40019edae2df req-d0df42c1-a0d7-432a-bba3-f7ac0487bf45 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.257 185177 DEBUG nova.compute.manager [req-ba65cff6-d530-4ba6-ba76-40019edae2df req-d0df42c1-a0d7-432a-bba3-f7ac0487bf45 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Processing event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.259 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.266 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769168824.2657378, 55846fbf-a87a-4cba-be0b-23125d3d9ef4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.266 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] VM Resumed (Lifecycle Event)
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.270 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.276 185177 INFO nova.virt.libvirt.driver [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Instance spawned successfully.
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.278 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.316 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.323 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.366 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.395 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.395 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.396 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.396 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.397 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.397 185177 DEBUG nova.virt.libvirt.driver [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.465 185177 INFO nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Took 9.47 seconds to spawn the instance on the hypervisor.
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.466 185177 DEBUG nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.566 185177 INFO nova.compute.manager [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Took 10.24 seconds to build instance.
Jan 23 11:47:04 compute-0 nova_compute[185173]: 2026-01-23 11:47:04.604 185177 DEBUG oslo_concurrency.lockutils [None req-1cd668ed-eed0-442b-9c9e-dcc093798310 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:04.606 238267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:04.606 238267 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:04.606 238267 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.159 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[565984fd-524a-492e-aab8-861881eec6bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.160 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d2c33ef-01 in ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.163 238267 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d2c33ef-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.163 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac766de-5ef7-40be-ac7c-78194b09078c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.166 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d2ba9f-a97d-42f3-8bfd-3a255848a423]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.202 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[aa527d44-07b4-485d-9a5b-80a1a0f4b974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.228 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[bc787bfe-c78c-4585-b39f-61a3af105d81]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.230 106832 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp09ss96gv/privsep.sock']
Jan 23 11:47:05 compute-0 nova_compute[185173]: 2026-01-23 11:47:05.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 11:47:05 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 11:47:05 compute-0 nova_compute[185173]: 2026-01-23 11:47:05.701 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.942 106832 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.944 106832 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp09ss96gv/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.814 238300 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.818 238300 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.821 238300 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.821 238300 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238300
Jan 23 11:47:05 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:05.948 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[29b70964-acb5-4104-93d0-f98bfa700af8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.379 185177 DEBUG nova.compute.manager [req-fd71a25f-76e2-45ad-85ac-5528b54de6ab req-8d744307-0b3b-4104-9ef9-5645b31ba789 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.379 185177 DEBUG oslo_concurrency.lockutils [req-fd71a25f-76e2-45ad-85ac-5528b54de6ab req-8d744307-0b3b-4104-9ef9-5645b31ba789 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.380 185177 DEBUG oslo_concurrency.lockutils [req-fd71a25f-76e2-45ad-85ac-5528b54de6ab req-8d744307-0b3b-4104-9ef9-5645b31ba789 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.380 185177 DEBUG oslo_concurrency.lockutils [req-fd71a25f-76e2-45ad-85ac-5528b54de6ab req-8d744307-0b3b-4104-9ef9-5645b31ba789 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.381 185177 DEBUG nova.compute.manager [req-fd71a25f-76e2-45ad-85ac-5528b54de6ab req-8d744307-0b3b-4104-9ef9-5645b31ba789 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] No waiting events found dispatching network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.381 185177 WARNING nova.compute.manager [req-fd71a25f-76e2-45ad-85ac-5528b54de6ab req-8d744307-0b3b-4104-9ef9-5645b31ba789 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received unexpected event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 for instance with vm_state active and task_state None.
Jan 23 11:47:06 compute-0 nova_compute[185173]: 2026-01-23 11:47:06.389 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:06 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:06.437 238300 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:06 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:06.437 238300 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:06 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:06.437 238300 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.030 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[09943da3-f016-4df1-948b-41795fff827a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 NetworkManager[56133]: <info>  [1769168827.0633] manager: (tap9d2c33ef-00): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.062 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[637733de-0f00-48f7-a09b-6232ee71862b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 systemd-udevd[238312]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.099 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b0676e-9b06-4b07-9a77-335922fd0899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.107 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[c578aa06-96de-4545-afed-027a304252a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 NetworkManager[56133]: <info>  [1769168827.1322] device (tap9d2c33ef-00): carrier: link connected
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.138 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[6d76b63a-b506-487e-8c9b-730c2015aa19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.157 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[50c5dc66-f0ab-4496-aed4-bed09505b699]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 16637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238330, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.174 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[7746d331-515b-41c4-9f4d-e853f5d30f21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:a626'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374776, 'tstamp': 374776}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238331, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.189 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[92496320-afe6-44f8-b8ba-19867493a3af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 16637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238332, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.220 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[5e443e33-4725-4eed-82f7-92baf63491fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.273 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[94428b1d-96ee-44de-8980-dddc58f10150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.275 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.275 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.276 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:47:07 compute-0 NetworkManager[56133]: <info>  [1769168827.2783] manager: (tap9d2c33ef-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 23 11:47:07 compute-0 kernel: tap9d2c33ef-00: entered promiscuous mode
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.277 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.281 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.282 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:47:07 compute-0 ovn_controller[97581]: 2026-01-23T11:47:07Z|00031|binding|INFO|Releasing lport a3c84d66-2ae2-461a-92f2-b9999c7b469e from this chassis (sb_readonly=0)
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.283 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.300 106832 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d2c33ef-0f52-43b5-80dd-899657aece53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d2c33ef-0f52-43b5-80dd-899657aece53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.299 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:07 compute-0 nova_compute[185173]: 2026-01-23 11:47:07.301 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.302 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[d6eaa6a1-9e3c-4951-858d-ed9116de24dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.303 106832 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: global
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     log         /dev/log local0 debug
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     log-tag     haproxy-metadata-proxy-9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     user        root
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     group       root
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     maxconn     1024
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     pidfile     /var/lib/neutron/external/pids/9d2c33ef-0f52-43b5-80dd-899657aece53.pid.haproxy
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     daemon
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: defaults
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     log global
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     mode http
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     option httplog
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     option dontlognull
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     option http-server-close
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     option forwardfor
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     retries                 3
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     timeout http-request    30s
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     timeout connect         30s
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     timeout client          32s
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     timeout server          32s
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     timeout http-keep-alive 30s
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: listen listener
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     bind 169.254.169.254:80
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:     http-request add-header X-OVN-Network-ID 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 11:47:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:07.304 106832 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'env', 'PROCESS_TAG=haproxy-9d2c33ef-0f52-43b5-80dd-899657aece53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d2c33ef-0f52-43b5-80dd-899657aece53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 11:47:07 compute-0 podman[238364]: 2026-01-23 11:47:07.710105867 +0000 UTC m=+0.067180256 container create f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 11:47:07 compute-0 systemd[1]: Started libpod-conmon-f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e.scope.
Jan 23 11:47:07 compute-0 podman[238364]: 2026-01-23 11:47:07.678095949 +0000 UTC m=+0.035170358 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 11:47:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 11:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f4907b936c862ad4285c3a038757626f8e168a451c7171f9c4b7d30cd8c181/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 11:47:07 compute-0 podman[238364]: 2026-01-23 11:47:07.806072002 +0000 UTC m=+0.163146421 container init f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 11:47:07 compute-0 podman[238364]: 2026-01-23 11:47:07.814714825 +0000 UTC m=+0.171789214 container start f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:47:07 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [NOTICE]   (238382) : New worker (238384) forked
Jan 23 11:47:07 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [NOTICE]   (238382) : Loading success.
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.261 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.262 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.263 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.264 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.369 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.431 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.432 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.493 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.494 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.553 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.555 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.616 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.953 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.955 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5335MB free_disk=72.44538116455078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.956 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:08 compute-0 nova_compute[185173]: 2026-01-23 11:47:08.956 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.052 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.053 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.053 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.090 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.126 185177 ERROR nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [req-483fa36d-0efd-4941-9c72-129ac026248e] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 77dd020c-2f5c-40b0-b660-8a95a28aabbd.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-483fa36d-0efd-4941-9c72-129ac026248e"}]}
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.140 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.162 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.162 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.177 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.201 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.242 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.293 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updated inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.294 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.294 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.319 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:47:09 compute-0 nova_compute[185173]: 2026-01-23 11:47:09.320 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.321 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.323 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.323 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.584 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.585 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.586 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.586 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:47:10 compute-0 nova_compute[185173]: 2026-01-23 11:47:10.704 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:11 compute-0 nova_compute[185173]: 2026-01-23 11:47:11.391 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:11 compute-0 podman[238407]: 2026-01-23 11:47:11.731966848 +0000 UTC m=+0.059242121 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 23 11:47:11 compute-0 podman[238406]: 2026-01-23 11:47:11.762179942 +0000 UTC m=+0.092790627 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:47:13 compute-0 nova_compute[185173]: 2026-01-23 11:47:13.540 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:47:13 compute-0 nova_compute[185173]: 2026-01-23 11:47:13.565 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:47:13 compute-0 nova_compute[185173]: 2026-01-23 11:47:13.566 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:47:13 compute-0 nova_compute[185173]: 2026-01-23 11:47:13.566 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:13 compute-0 nova_compute[185173]: 2026-01-23 11:47:13.567 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:47:13 compute-0 nova_compute[185173]: 2026-01-23 11:47:13.567 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:47:13 compute-0 podman[238450]: 2026-01-23 11:47:13.721195697 +0000 UTC m=+0.055936329 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 11:47:15 compute-0 nova_compute[185173]: 2026-01-23 11:47:15.707 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:16 compute-0 nova_compute[185173]: 2026-01-23 11:47:16.393 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:16 compute-0 podman[238470]: 2026-01-23 11:47:16.766011531 +0000 UTC m=+0.096164431 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 11:47:20 compute-0 nova_compute[185173]: 2026-01-23 11:47:20.710 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:21 compute-0 nova_compute[185173]: 2026-01-23 11:47:21.395 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:22 compute-0 podman[238498]: 2026-01-23 11:47:22.771190395 +0000 UTC m=+0.102647530 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4320] manager: (patch-br-int-to-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4333] device (patch-br-int-to-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <warn>  [1769168843.4337] device (patch-br-int-to-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:47:23 compute-0 ovn_controller[97581]: 2026-01-23T11:47:23Z|00032|binding|INFO|Releasing lport a3c84d66-2ae2-461a-92f2-b9999c7b469e from this chassis (sb_readonly=0)
Jan 23 11:47:23 compute-0 nova_compute[185173]: 2026-01-23 11:47:23.430 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4353] manager: (patch-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4359] device (patch-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <warn>  [1769168843.4360] device (patch-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4375] manager: (patch-br-int-to-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4385] manager: (patch-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4391] device (patch-br-int-to-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 11:47:23 compute-0 NetworkManager[56133]: <info>  [1769168843.4398] device (patch-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 11:47:23 compute-0 nova_compute[185173]: 2026-01-23 11:47:23.474 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:23 compute-0 ovn_controller[97581]: 2026-01-23T11:47:23Z|00033|binding|INFO|Releasing lport a3c84d66-2ae2-461a-92f2-b9999c7b469e from this chassis (sb_readonly=0)
Jan 23 11:47:23 compute-0 nova_compute[185173]: 2026-01-23 11:47:23.483 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:24 compute-0 nova_compute[185173]: 2026-01-23 11:47:24.008 185177 DEBUG nova.compute.manager [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-changed-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:47:24 compute-0 nova_compute[185173]: 2026-01-23 11:47:24.009 185177 DEBUG nova.compute.manager [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Refreshing instance network info cache due to event network-changed-4c18896b-ecf0-4d1b-b901-f24edce45c11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 11:47:24 compute-0 nova_compute[185173]: 2026-01-23 11:47:24.009 185177 DEBUG oslo_concurrency.lockutils [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:47:24 compute-0 nova_compute[185173]: 2026-01-23 11:47:24.009 185177 DEBUG oslo_concurrency.lockutils [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:47:24 compute-0 nova_compute[185173]: 2026-01-23 11:47:24.009 185177 DEBUG nova.network.neutron [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Refreshing network info cache for port 4c18896b-ecf0-4d1b-b901-f24edce45c11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 11:47:24 compute-0 podman[238520]: 2026-01-23 11:47:24.74845422 +0000 UTC m=+0.081117920 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_id=kepler, release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 23 11:47:25 compute-0 nova_compute[185173]: 2026-01-23 11:47:25.713 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:25 compute-0 nova_compute[185173]: 2026-01-23 11:47:25.935 185177 DEBUG nova.network.neutron [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated VIF entry in instance network info cache for port 4c18896b-ecf0-4d1b-b901-f24edce45c11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 11:47:25 compute-0 nova_compute[185173]: 2026-01-23 11:47:25.936 185177 DEBUG nova.network.neutron [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:47:25 compute-0 nova_compute[185173]: 2026-01-23 11:47:25.968 185177 DEBUG oslo_concurrency.lockutils [req-c9c9ae81-35b0-4528-ac64-ffa4849ea9e4 req-1f65a122-e414-4cfa-93d6-774dce1704b7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:47:26 compute-0 nova_compute[185173]: 2026-01-23 11:47:26.398 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:28 compute-0 podman[238537]: 2026-01-23 11:47:28.78937655 +0000 UTC m=+0.119422164 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 11:47:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:29.091 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:47:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:29.092 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:47:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:47:29.093 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:47:29 compute-0 podman[201022]: time="2026-01-23T11:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:47:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:47:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Jan 23 11:47:30 compute-0 nova_compute[185173]: 2026-01-23 11:47:30.716 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:31 compute-0 nova_compute[185173]: 2026-01-23 11:47:31.402 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:31 compute-0 openstack_network_exporter[204160]: ERROR   11:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:47:31 compute-0 openstack_network_exporter[204160]: ERROR   11:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:47:33 compute-0 podman[238559]: 2026-01-23 11:47:33.737031895 +0000 UTC m=+0.069192326 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 23 11:47:35 compute-0 nova_compute[185173]: 2026-01-23 11:47:35.720 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:36 compute-0 nova_compute[185173]: 2026-01-23 11:47:36.404 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:37 compute-0 ovn_controller[97581]: 2026-01-23T11:47:37Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:21:a1 192.168.0.65
Jan 23 11:47:37 compute-0 ovn_controller[97581]: 2026-01-23T11:47:37Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:21:a1 192.168.0.65
Jan 23 11:47:40 compute-0 nova_compute[185173]: 2026-01-23 11:47:40.722 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:41 compute-0 nova_compute[185173]: 2026-01-23 11:47:41.407 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:42 compute-0 podman[238586]: 2026-01-23 11:47:42.778927889 +0000 UTC m=+0.091397333 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:47:42 compute-0 podman[238587]: 2026-01-23 11:47:42.787375027 +0000 UTC m=+0.106314220 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 23 11:47:44 compute-0 podman[238627]: 2026-01-23 11:47:44.776856972 +0000 UTC m=+0.111875498 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:47:45 compute-0 nova_compute[185173]: 2026-01-23 11:47:45.725 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:46 compute-0 nova_compute[185173]: 2026-01-23 11:47:46.409 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:47 compute-0 podman[238645]: 2026-01-23 11:47:47.816076545 +0000 UTC m=+0.140210586 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 11:47:50 compute-0 nova_compute[185173]: 2026-01-23 11:47:50.728 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:51 compute-0 nova_compute[185173]: 2026-01-23 11:47:51.411 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:53 compute-0 ovn_controller[97581]: 2026-01-23T11:47:53Z|00034|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 23 11:47:53 compute-0 podman[238673]: 2026-01-23 11:47:53.75034302 +0000 UTC m=+0.085097128 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 23 11:47:55 compute-0 nova_compute[185173]: 2026-01-23 11:47:55.731 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:55 compute-0 podman[238692]: 2026-01-23 11:47:55.764339421 +0000 UTC m=+0.082899544 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, config_id=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Jan 23 11:47:56 compute-0 nova_compute[185173]: 2026-01-23 11:47:56.414 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:47:59 compute-0 podman[238712]: 2026-01-23 11:47:59.733639575 +0000 UTC m=+0.066298985 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:47:59 compute-0 podman[201022]: time="2026-01-23T11:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:47:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:47:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Jan 23 11:48:00 compute-0 nova_compute[185173]: 2026-01-23 11:48:00.734 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:01 compute-0 nova_compute[185173]: 2026-01-23 11:48:01.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:01 compute-0 openstack_network_exporter[204160]: ERROR   11:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:48:01 compute-0 openstack_network_exporter[204160]: ERROR   11:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:48:01 compute-0 nova_compute[185173]: 2026-01-23 11:48:01.415 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:04 compute-0 podman[238736]: 2026-01-23 11:48:04.768604841 +0000 UTC m=+0.087502607 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 23 11:48:05 compute-0 nova_compute[185173]: 2026-01-23 11:48:05.736 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:06 compute-0 nova_compute[185173]: 2026-01-23 11:48:06.259 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:06 compute-0 nova_compute[185173]: 2026-01-23 11:48:06.418 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:07 compute-0 nova_compute[185173]: 2026-01-23 11:48:07.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:07.865 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:48:07 compute-0 nova_compute[185173]: 2026-01-23 11:48:07.866 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:07 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:07.866 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.231 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.237 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.416 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.440 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Triggering sync for uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.441 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.441 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:08 compute-0 nova_compute[185173]: 2026-01-23 11:48:08.487 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.264 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.265 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.266 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.267 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.373 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.432 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.433 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.491 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.492 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.551 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.553 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.615 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.979 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.980 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5214MB free_disk=72.42521667480469GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.981 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:09 compute-0 nova_compute[185173]: 2026-01-23 11:48:09.981 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.053 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.054 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.054 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.405 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.422 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.424 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.424 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.425 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.425 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.437 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 11:48:10 compute-0 nova_compute[185173]: 2026-01-23 11:48:10.738 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:10 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:10.868 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.421 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.432 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.454 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.454 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.455 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.640 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.640 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.640 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:48:11 compute-0 nova_compute[185173]: 2026-01-23 11:48:11.641 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.463 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.486 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.486 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.487 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.487 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.488 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.506 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.507 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.525 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.608 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.609 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.616 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.616 185177 INFO nova.compute.claims [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Claim successful on node compute-0.ctlplane.example.com
Jan 23 11:48:13 compute-0 podman[238771]: 2026-01-23 11:48:13.752840956 +0000 UTC m=+0.080617718 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 23 11:48:13 compute-0 podman[238770]: 2026-01-23 11:48:13.754741632 +0000 UTC m=+0.085486017 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.768 185177 DEBUG nova.compute.provider_tree [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.784 185177 DEBUG nova.scheduler.client.report [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.810 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.811 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.868 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.868 185177 DEBUG nova.network.neutron [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.891 185177 INFO nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 11:48:13 compute-0 nova_compute[185173]: 2026-01-23 11:48:13.933 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.069 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.070 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.071 185177 INFO nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Creating image(s)
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.072 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.072 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.074 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.087 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.143 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.144 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "80c014b261205a8ef2db68f438805c389e810b13" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.144 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.157 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.215 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.216 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.252 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.253 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.254 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.308 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.309 185177 DEBUG nova.virt.disk.api [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Checking if we can resize image /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.309 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.365 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.366 185177 DEBUG nova.virt.disk.api [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Cannot resize image /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.367 185177 DEBUG nova.objects.instance [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'migration_context' on Instance uuid 84b3f69a-6ab7-406d-939b-a485518755a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.384 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.385 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.385 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.402 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.459 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.460 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.461 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.471 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.525 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.526 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.584 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.585 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.586 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.653 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.655 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.655 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Ensure instance console log exists: /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.656 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.657 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:14 compute-0 nova_compute[185173]: 2026-01-23 11:48:14.657 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.711 185177 DEBUG nova.network.neutron [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Successfully updated port: 05dcc60f-5c09-47f3-9834-3594bf71b68e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.727 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.727 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.727 185177 DEBUG nova.network.neutron [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 11:48:15 compute-0 podman[238841]: 2026-01-23 11:48:15.738626872 +0000 UTC m=+0.062054141 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.740 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.835 185177 DEBUG nova.compute.manager [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-changed-05dcc60f-5c09-47f3-9834-3594bf71b68e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.835 185177 DEBUG nova.compute.manager [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Refreshing instance network info cache due to event network-changed-05dcc60f-5c09-47f3-9834-3594bf71b68e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.835 185177 DEBUG oslo_concurrency.lockutils [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:48:15 compute-0 nova_compute[185173]: 2026-01-23 11:48:15.935 185177 DEBUG nova.network.neutron [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 11:48:16 compute-0 nova_compute[185173]: 2026-01-23 11:48:16.424 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.009 185177 DEBUG nova.network.neutron [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.030 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.031 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Instance network_info: |[{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.031 185177 DEBUG oslo_concurrency.lockutils [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.032 185177 DEBUG nova.network.neutron [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Refreshing network info cache for port 05dcc60f-5c09-47f3-9834-3594bf71b68e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.037 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Start _get_guest_xml network_info=[{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.049 185177 WARNING nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.067 185177 DEBUG nova.virt.libvirt.host [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.068 185177 DEBUG nova.virt.libvirt.host [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.074 185177 DEBUG nova.virt.libvirt.host [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.074 185177 DEBUG nova.virt.libvirt.host [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.075 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.076 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T11:45:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='f2c5c5dd-a580-4885-a3ab-a766eac401c8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.076 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.077 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.077 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.078 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.078 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.078 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.079 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.079 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.080 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.080 185177 DEBUG nova.virt.hardware [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.086 185177 DEBUG nova.virt.libvirt.vif [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:48:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk',id=2,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-xw0cqszz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:48:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDYyNDExMzc4MDM1Mzc4NTI4MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncw==
Jan 23 11:48:17 compute-0 nova_compute[185173]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDYyNDExMzc4MDM1Mzc4NTI4MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=84b3f69a-6ab7-406d-939b-a485518755a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.086 185177 DEBUG nova.network.os_vif_util [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.088 185177 DEBUG nova.network.os_vif_util [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.089 185177 DEBUG nova.objects.instance [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84b3f69a-6ab7-406d-939b-a485518755a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.104 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] End _get_guest_xml xml=<domain type="kvm">
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <uuid>84b3f69a-6ab7-406d-939b-a485518755a5</uuid>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <name>instance-00000002</name>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <memory>524288</memory>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <metadata>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:name>vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk</nova:name>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 11:48:17</nova:creationTime>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:flavor name="m1.small">
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:memory>512</nova:memory>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:ephemeral>1</nova:ephemeral>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:user uuid="d9858533c2284846a8f0f19a1fb45045">admin</nova:user>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:project uuid="bd16a0de2f5e4a8480a855ef0e1a3f14">admin</nova:project>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="c5833e41-b4db-454e-8f49-014aa18c7dc5"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <nova:ports>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         <nova:port uuid="05dcc60f-5c09-47f3-9834-3594bf71b68e">
Jan 23 11:48:17 compute-0 nova_compute[185173]:           <nova:ip type="fixed" address="192.168.0.62" ipVersion="4"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:         </nova:port>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       </nova:ports>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </metadata>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <system>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <entry name="serial">84b3f69a-6ab7-406d-939b-a485518755a5</entry>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <entry name="uuid">84b3f69a-6ab7-406d-939b-a485518755a5</entry>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </system>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <os>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </os>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <features>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <apic/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </features>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </clock>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <target dev="vdb" bus="virtio"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.config"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <interface type="ethernet">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <mac address="fa:16:3e:40:4f:a6"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <mtu size="1442"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <target dev="tap05dcc60f-5c"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/console.log" append="off"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </serial>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <video>
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </video>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 11:48:17 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 11:48:17 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 11:48:17 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:48:17 compute-0 nova_compute[185173]: </domain>
Jan 23 11:48:17 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.105 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Preparing to wait for external event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.105 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.105 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.106 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.107 185177 DEBUG nova.virt.libvirt.vif [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:48:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk',id=2,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-xw0cqszz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:48:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDYyNDExMzc4MDM1Mzc4NTI4MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG8=
Jan 23 11:48:17 compute-0 nova_compute[185173]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDYyNDExMzc4MDM1Mzc4NTI4MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=84b3f69a-6ab7-406d-939b-a485518755a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.107 185177 DEBUG nova.network.os_vif_util [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.108 185177 DEBUG nova.network.os_vif_util [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.109 185177 DEBUG os_vif [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.110 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.111 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.112 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.117 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.117 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05dcc60f-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.118 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05dcc60f-5c, col_values=(('external_ids', {'iface-id': '05dcc60f-5c09-47f3-9834-3594bf71b68e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:4f:a6', 'vm-uuid': '84b3f69a-6ab7-406d-939b-a485518755a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.121 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 NetworkManager[56133]: <info>  [1769168897.1247] manager: (tap05dcc60f-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.125 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.137 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.138 185177 INFO os_vif [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c')
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.202 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.203 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.203 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.204 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No VIF found with MAC fa:16:3e:40:4f:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.204 185177 INFO nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Using config drive
Jan 23 11:48:17 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 11:48:17.086 185177 DEBUG nova.virt.libvirt.vif [None req-bc8342c0-d9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 11:48:17 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 11:48:17.107 185177 DEBUG nova.virt.libvirt.vif [None req-bc8342c0-d9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.614 185177 INFO nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Creating config drive at /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.config
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.619 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahrwh5u6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.754 185177 DEBUG oslo_concurrency.processutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahrwh5u6" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:48:17 compute-0 NetworkManager[56133]: <info>  [1769168897.8860] manager: (tap05dcc60f-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 23 11:48:17 compute-0 kernel: tap05dcc60f-5c: entered promiscuous mode
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.890 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 ovn_controller[97581]: 2026-01-23T11:48:17Z|00035|binding|INFO|Claiming lport 05dcc60f-5c09-47f3-9834-3594bf71b68e for this chassis.
Jan 23 11:48:17 compute-0 ovn_controller[97581]: 2026-01-23T11:48:17Z|00036|binding|INFO|05dcc60f-5c09-47f3-9834-3594bf71b68e: Claiming fa:16:3e:40:4f:a6 192.168.0.62
Jan 23 11:48:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:17.906 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:4f:a6 192.168.0.62'], port_security=['fa:16:3e:40:4f:a6 192.168.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wvvtbi4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-port-pqbiurkrbamj', 'neutron:cidrs': '192.168.0.62/24', 'neutron:device_id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wvvtbi4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-port-pqbiurkrbamj', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=05dcc60f-5c09-47f3-9834-3594bf71b68e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:48:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:17.908 106832 INFO neutron.agent.ovn.metadata.agent [-] Port 05dcc60f-5c09-47f3-9834-3594bf71b68e in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 bound to our chassis
Jan 23 11:48:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:17.910 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 11:48:17 compute-0 ovn_controller[97581]: 2026-01-23T11:48:17Z|00037|binding|INFO|Setting lport 05dcc60f-5c09-47f3-9834-3594bf71b68e ovn-installed in OVS
Jan 23 11:48:17 compute-0 ovn_controller[97581]: 2026-01-23T11:48:17Z|00038|binding|INFO|Setting lport 05dcc60f-5c09-47f3-9834-3594bf71b68e up in Southbound
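The claim / ovn-installed / up sequence above is ovn-controller binding the logical port to this chassis in the Southbound database. One way to inspect the resulting record is ovn-sbctl; a sketch via subprocess, assuming ovn-sbctl is available on the host with access to the Southbound socket:

    import subprocess

    LPORT = "05dcc60f-5c09-47f3-9834-3594bf71b68e"

    # Query the Port_Binding row ovn-controller just claimed; "chassis"
    # should reference this host and "up" should now read true.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)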
Jan 23 11:48:17 compute-0 nova_compute[185173]: 2026-01-23 11:48:17.919 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:17.931 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[3d87beb9-3b8d-4135-b478-c0a999859e46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:48:17 compute-0 systemd-machined[156550]: New machine qemu-2-instance-00000002.
Jan 23 11:48:17 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 23 11:48:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:17.974 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[a162a1af-d863-43ac-a164-635a57c3558e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:48:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:17.978 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaf7f6d-a38e-4053-a016-e4fe1ac6f88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:48:18 compute-0 systemd-udevd[238902]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.013 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc94013-65f4-4582-913f-16bb69e49dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:48:18 compute-0 NetworkManager[56133]: <info>  [1769168898.0204] device (tap05dcc60f-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 11:48:18 compute-0 NetworkManager[56133]: <info>  [1769168898.0259] device (tap05dcc60f-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.032 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[06d5d881-47ac-458d-ba21-1aeebd05487e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 16637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238913, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.049 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[49f17c19-5535-4996-951b-022ec0433f6b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374787, 'tstamp': 374787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238916, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374789, 'tstamp': 374789}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238916, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
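The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-9d2c33ef-... namespace, where the metadata tap carries both 169.254.169.254/32 and 192.168.0.2/24. A minimal pyroute2 sketch of the same address query, assuming the namespace exists on this host and the caller has the required privileges:

    from pyroute2 import NetNS

    # Open the metadata namespace the agent provisioned and list the
    # addresses on its interfaces, mirroring the logged RTM_NEWADDR replies.
    with NetNS("ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53") as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  f"/{addr['prefixlen']}")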
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.051 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.053 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.054 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.054 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.055 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.055 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:48:18 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:18.056 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
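The three ovsdbapp transactions above (delete the tap from br-ex if present, add it to br-int, set its iface-id) have direct ovs-vsctl equivalents; both no-op here, which is why the log reports "Transaction caused no change". A sketch, assuming ovs-vsctl is available on the host:

    import subprocess

    PORT = "tap9d2c33ef-00"
    IFACE_ID = "a3c84d66-2ae2-461a-92f2-b9999c7b469e"

    # DelPortCommand(if_exists=True) / AddPortCommand(may_exist=True) /
    # DbSetCommand on Interface.external_ids, as in the logged transactions.
    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT],
        ["ovs-vsctl", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}"],
    ):
        subprocess.run(cmd, check=True)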
Jan 23 11:48:18 compute-0 podman[238872]: 2026-01-23 11:48:18.059931063 +0000 UTC m=+0.194211937 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.159 185177 DEBUG nova.compute.manager [req-c439f0c0-57e5-4ea5-90e2-b05d3fc575bd req-fba9b94b-4812-4f29-bd34-1450f9c747d4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.159 185177 DEBUG oslo_concurrency.lockutils [req-c439f0c0-57e5-4ea5-90e2-b05d3fc575bd req-fba9b94b-4812-4f29-bd34-1450f9c747d4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.160 185177 DEBUG oslo_concurrency.lockutils [req-c439f0c0-57e5-4ea5-90e2-b05d3fc575bd req-fba9b94b-4812-4f29-bd34-1450f9c747d4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.160 185177 DEBUG oslo_concurrency.lockutils [req-c439f0c0-57e5-4ea5-90e2-b05d3fc575bd req-fba9b94b-4812-4f29-bd34-1450f9c747d4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
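The acquire/release pattern above is oslo.concurrency's named-lock helper serializing per-instance event handling; the "waited"/"held" timings in the log come from that wrapper. A minimal sketch of the same API, with the lock name taken from the log and a hypothetical body:

    from oslo_concurrency import lockutils

    INSTANCE = "84b3f69a-6ab7-406d-939b-a485518755a5"

    # Same primitive nova uses: an in-process lock keyed by name, so all
    # event handlers for one instance run one at a time.
    @lockutils.synchronized(f"{INSTANCE}-events")
    def _pop_event():
        # hypothetical body: pop a pending network-vif-plugged event
        pass

    _pop_event()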
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.160 185177 DEBUG nova.compute.manager [req-c439f0c0-57e5-4ea5-90e2-b05d3fc575bd req-fba9b94b-4812-4f29-bd34-1450f9c747d4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Processing event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.259 185177 DEBUG nova.network.neutron [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated VIF entry in instance network info cache for port 05dcc60f-5c09-47f3-9834-3594bf71b68e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.259 185177 DEBUG nova.network.neutron [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.272 185177 DEBUG oslo_concurrency.lockutils [req-f5fa4aca-0ba4-4202-9d25-f1de3a5be02d req-009c3f23-8caa-4568-ae9b-284b56f907bc e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.304 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.306 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769168898.3034778, 84b3f69a-6ab7-406d-939b-a485518755a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.307 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] VM Started (Lifecycle Event)
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.315 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.323 185177 INFO nova.virt.libvirt.driver [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Instance spawned successfully.
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.324 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.327 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.333 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
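The numeric states in the sync message above ("DB power_state: 0, VM power_state: 1") are nova's power-state codes; the mismatch is expected here because the DB has not yet recorded the freshly started guest. For reference, a sketch of the mapping as defined in nova's power_state module (treat as illustrative, not an excerpt from this deployment):

    # nova.compute.power_state codes, as used in the sync message above:
    # 0 = nothing recorded yet, 1 = hypervisor reports the guest running.
    NOSTATE = 0
    RUNNING = 1
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7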
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.345 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.345 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.346 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.346 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.346 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.347 185177 DEBUG nova.virt.libvirt.driver [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.351 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.352 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769168898.303884, 84b3f69a-6ab7-406d-939b-a485518755a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.352 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] VM Paused (Lifecycle Event)
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.378 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.384 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769168898.3139424, 84b3f69a-6ab7-406d-939b-a485518755a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.384 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] VM Resumed (Lifecycle Event)
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.405 185177 INFO nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Took 4.34 seconds to spawn the instance on the hypervisor.
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.405 185177 DEBUG nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.524 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.537 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.557 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.599 185177 INFO nova.compute.manager [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Took 5.01 seconds to build instance.
Jan 23 11:48:18 compute-0 nova_compute[185173]: 2026-01-23 11:48:18.613 185177 DEBUG oslo_concurrency.lockutils [None req-bc8342c0-d98b-4c89-9f24-c1d6f7aa7459 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:20 compute-0 nova_compute[185173]: 2026-01-23 11:48:20.275 185177 DEBUG nova.compute.manager [req-435cc900-e544-4d6e-b102-e83586b83402 req-01c55e33-3fc4-4dae-8374-36b56305c471 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:48:20 compute-0 nova_compute[185173]: 2026-01-23 11:48:20.275 185177 DEBUG oslo_concurrency.lockutils [req-435cc900-e544-4d6e-b102-e83586b83402 req-01c55e33-3fc4-4dae-8374-36b56305c471 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:20 compute-0 nova_compute[185173]: 2026-01-23 11:48:20.275 185177 DEBUG oslo_concurrency.lockutils [req-435cc900-e544-4d6e-b102-e83586b83402 req-01c55e33-3fc4-4dae-8374-36b56305c471 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:20 compute-0 nova_compute[185173]: 2026-01-23 11:48:20.276 185177 DEBUG oslo_concurrency.lockutils [req-435cc900-e544-4d6e-b102-e83586b83402 req-01c55e33-3fc4-4dae-8374-36b56305c471 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:20 compute-0 nova_compute[185173]: 2026-01-23 11:48:20.276 185177 DEBUG nova.compute.manager [req-435cc900-e544-4d6e-b102-e83586b83402 req-01c55e33-3fc4-4dae-8374-36b56305c471 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] No waiting events found dispatching network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 11:48:20 compute-0 nova_compute[185173]: 2026-01-23 11:48:20.276 185177 WARNING nova.compute.manager [req-435cc900-e544-4d6e-b102-e83586b83402 req-01c55e33-3fc4-4dae-8374-36b56305c471 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received unexpected event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e for instance with vm_state active and task_state None.
Jan 23 11:48:21 compute-0 nova_compute[185173]: 2026-01-23 11:48:21.427 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:22 compute-0 nova_compute[185173]: 2026-01-23 11:48:22.122 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:24 compute-0 podman[238927]: 2026-01-23 11:48:24.771940009 +0000 UTC m=+0.097940138 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:48:26 compute-0 nova_compute[185173]: 2026-01-23 11:48:26.430 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:26 compute-0 podman[238947]: 2026-01-23 11:48:26.739059663 +0000 UTC m=+0.075901014 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=kepler, release-0.7.12=, container_name=kepler, distribution-scope=public, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Jan 23 11:48:27 compute-0 nova_compute[185173]: 2026-01-23 11:48:27.125 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:29.091 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:48:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:29.093 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:48:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:48:29.094 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:48:29 compute-0 podman[201022]: time="2026-01-23T11:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:48:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:48:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
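The GET lines above are the podman system service answering libpod REST queries over its unix socket (the podman_exporter container later in this log points CONTAINER_HOST at /run/podman/podman.sock). The same endpoint can be queried by hand; a sketch via curl's unix-socket support, wrapped in subprocess, with the socket path taken from the exporter's environment:

    import subprocess

    # Same libpod endpoint as the logged request; the "d" host is a dummy,
    # since routing happens over the unix socket, not DNS.
    subprocess.run([
        "curl", "--unix-socket", "/run/podman/podman.sock",
        "http://d/v4.9.3/libpod/containers/json?all=true",
    ], check=True)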
Jan 23 11:48:30 compute-0 podman[238967]: 2026-01-23 11:48:30.758946456 +0000 UTC m=+0.083417390 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:48:31 compute-0 openstack_network_exporter[204160]: ERROR   11:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:48:31 compute-0 openstack_network_exporter[204160]: ERROR   11:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
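The two exporter errors above come from ovs-appctl calls (dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show) that only apply to a userspace (netdev/DPDK) datapath; on a host running the kernel ("system") datapath they fail exactly as logged. A sketch of checking which datapaths actually exist, assuming ovs-appctl on the host:

    import subprocess

    # dpif/show lists the configured datapaths and their ports; with only
    # the kernel datapath present, the pmd-* netdev commands above fail.
    subprocess.run(["ovs-appctl", "dpif/show"], check=True)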
Jan 23 11:48:31 compute-0 nova_compute[185173]: 2026-01-23 11:48:31.431 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:32 compute-0 nova_compute[185173]: 2026-01-23 11:48:32.129 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:35 compute-0 podman[238989]: 2026-01-23 11:48:35.73685263 +0000 UTC m=+0.072047569 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Jan 23 11:48:36 compute-0 nova_compute[185173]: 2026-01-23 11:48:36.432 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:37 compute-0 nova_compute[185173]: 2026-01-23 11:48:37.133 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:41 compute-0 nova_compute[185173]: 2026-01-23 11:48:41.435 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:42 compute-0 nova_compute[185173]: 2026-01-23 11:48:42.135 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:44 compute-0 podman[239009]: 2026-01-23 11:48:44.727534824 +0000 UTC m=+0.058147276 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:48:44 compute-0 podman[239010]: 2026-01-23 11:48:44.739252333 +0000 UTC m=+0.064472182 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 11:48:46 compute-0 nova_compute[185173]: 2026-01-23 11:48:46.437 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:46 compute-0 podman[239050]: 2026-01-23 11:48:46.74012043 +0000 UTC m=+0.072384387 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:48:47 compute-0 nova_compute[185173]: 2026-01-23 11:48:47.138 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:47 compute-0 ovn_controller[97581]: 2026-01-23T11:48:47Z|00039|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Jan 23 11:48:48 compute-0 podman[239069]: 2026-01-23 11:48:48.788230644 +0000 UTC m=+0.116350272 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 11:48:49 compute-0 ovn_controller[97581]: 2026-01-23T11:48:49Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:4f:a6 192.168.0.62
Jan 23 11:48:49 compute-0 ovn_controller[97581]: 2026-01-23T11:48:49Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:4f:a6 192.168.0.62
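The DHCPOFFER/DHCPACK pair above is OVN's native DHCP responder (pinctrl) answering the new guest directly, with no dnsmasq involved. Provisioning that path amounts to attaching DHCP_Options to the logical switch port; a Northbound sketch via ovn-nbctl, where the CIDR, router, and port UUID mirror this log and the remaining values are illustrative assumptions:

    import subprocess

    def nbctl(*args):
        return subprocess.run(["ovn-nbctl", *args], capture_output=True,
                              text=True, check=True).stdout.strip()

    # Create DHCP options for the logged subnet and attach them to the
    # logical port; server_mac and lease_time are made-up example values.
    opts = nbctl("dhcp-options-create", "192.168.0.0/24")
    nbctl("dhcp-options-set-options", opts,
          "router=192.168.0.1", "server_id=192.168.0.1",
          "server_mac=c0:ff:ee:00:00:01", "lease_time=43200")
    nbctl("lsp-set-dhcpv4-options",
          "05dcc60f-5c09-47f3-9834-3594bf71b68e", opts)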
Jan 23 11:48:51 compute-0 nova_compute[185173]: 2026-01-23 11:48:51.440 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:52 compute-0 nova_compute[185173]: 2026-01-23 11:48:52.143 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:55 compute-0 podman[239108]: 2026-01-23 11:48:55.754159779 +0000 UTC m=+0.078469917 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 11:48:56 compute-0 nova_compute[185173]: 2026-01-23 11:48:56.441 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:57 compute-0 nova_compute[185173]: 2026-01-23 11:48:57.146 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:48:57 compute-0 podman[239127]: 2026-01-23 11:48:57.752442613 +0000 UTC m=+0.076832197 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, release-0.7.12=, maintainer=Red Hat, Inc.)
Jan 23 11:48:59 compute-0 podman[201022]: time="2026-01-23T11:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:48:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:48:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
Jan 23 11:49:01 compute-0 openstack_network_exporter[204160]: ERROR   11:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:49:01 compute-0 openstack_network_exporter[204160]: ERROR   11:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:49:01 compute-0 nova_compute[185173]: 2026-01-23 11:49:01.443 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.451 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.452 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb63320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.460 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.463 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 84b3f69a-6ab7-406d-939b-a485518755a5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 23 11:49:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:01.464 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/84b3f69a-6ab7-406d-939b-a485518755a5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad70b57d9194f6532b182b578b16289681d355eb6a1afd27a70859dd1387cbc9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 23 11:49:01 compute-0 podman[239148]: 2026-01-23 11:49:01.791426637 +0000 UTC m=+0.096531323 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 11:49:02 compute-0 nova_compute[185173]: 2026-01-23 11:49:02.148 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.250 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 23 Jan 2026 11:49:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-38fadef7-560f-4cd4-a14a-b5ab19c8b14a x-openstack-request-id: req-38fadef7-560f-4cd4-a14a-b5ab19c8b14a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.250 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "84b3f69a-6ab7-406d-939b-a485518755a5", "name": "vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk", "status": "ACTIVE", "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "user_id": "d9858533c2284846a8f0f19a1fb45045", "metadata": {"metering.server_group": "500baa09-1e39-474e-b275-8b2dffe3a65b"}, "hostId": "47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb", "image": {"id": "c5833e41-b4db-454e-8f49-014aa18c7dc5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c5833e41-b4db-454e-8f49-014aa18c7dc5"}]}, "flavor": {"id": "f2c5c5dd-a580-4885-a3ab-a766eac401c8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/f2c5c5dd-a580-4885-a3ab-a766eac401c8"}]}, "created": "2026-01-23T11:48:11Z", "updated": "2026-01-23T11:48:18Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.62", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:40:4f:a6"}, {"version": 4, "addr": "192.168.122.182", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:40:4f:a6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/84b3f69a-6ab7-406d-939b-a485518755a5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/84b3f69a-6ab7-406d-939b-a485518755a5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-23T11:48:18.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.251 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/84b3f69a-6ab7-406d-939b-a485518755a5 used request id req-38fadef7-560f-4cd4-a14a-b5ab19c8b14a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.252 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'name': 'vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.253 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.254 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:49:03.254259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.261 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 55846fbf-a87a-4cba-be0b-23125d3d9ef4 / tap4c18896b-ec inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.261 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.266 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 84b3f69a-6ab7-406d-939b-a485518755a5 / tap05dcc60f-5c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.266 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.268 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.269 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:49:03.269243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.299 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.300 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.300 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.332 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.332 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.333 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.333 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:49:03.334212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.405 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.405 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.405 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.468 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.469 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.469 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.470 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk>]
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.471 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.471 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-23T11:49:03.470601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:49:03.471787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 784401198 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 8862519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.472 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.473 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:49:03.473843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.474 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.475 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.476 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.476 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.476 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:49:03.475330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.477 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.477 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.477 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:49:03.477726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:49:03.479031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.480 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:49:03.480052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.480 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.480 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.480 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 359401908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.481 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 61167194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.481 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 48392812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.482 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.483 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.483 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:49:03.482192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.484 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.484 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.484 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.486 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.486 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:49:03.485483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:49:03.486436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:49:03.487587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.510 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.537 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/memory.usage volume: 49.6015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.538 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.538 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.539 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.539 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.540 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:49:03.538802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:49:03.540820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.541 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.541 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.542 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 33670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.543 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/cpu volume: 30850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:49:03.542678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.544 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.544 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.544 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk>]
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.545 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.545 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.546 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.546 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-23T11:49:03.544562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:49:03.545930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.547 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.548 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:49:03.547842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.548 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.549 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.550 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.550 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:49:03.549684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.551 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.551 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.551 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.552 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.553 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.553 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:49:03.553126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.554 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.554 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.555 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.555 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.556 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:49:03.555249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:49:03.557500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.557 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.558 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.558 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.558 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.559 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.559 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.560 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.561 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes volume: 1821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:49:03.560819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.562 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.563 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.563 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.563 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.564 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.564 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.565 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:49:03.562968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:49:03.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:49:06 compute-0 nova_compute[185173]: 2026-01-23 11:49:06.446 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:06 compute-0 podman[239171]: 2026-01-23 11:49:06.793146561 +0000 UTC m=+0.125787536 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Jan 23 11:49:07 compute-0 nova_compute[185173]: 2026-01-23 11:49:07.151 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:07 compute-0 nova_compute[185173]: 2026-01-23 11:49:07.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:08 compute-0 nova_compute[185173]: 2026-01-23 11:49:08.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:08 compute-0 nova_compute[185173]: 2026-01-23 11:49:08.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.269 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.269 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.269 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.450 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.510 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.511 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.569 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.570 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.640 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.641 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.716 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.724 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.785 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.786 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.850 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.851 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.912 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.913 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:49:09 compute-0 nova_compute[185173]: 2026-01-23 11:49:09.971 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.288 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.289 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5021MB free_disk=72.40071105957031GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.290 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.290 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.486 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.486 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.487 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.487 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.657 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.673 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.695 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:49:10 compute-0 nova_compute[185173]: 2026-01-23 11:49:10.696 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:49:11 compute-0 nova_compute[185173]: 2026-01-23 11:49:11.449 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:11 compute-0 nova_compute[185173]: 2026-01-23 11:49:11.697 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.154 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.659 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.659 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.660 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:49:12 compute-0 nova_compute[185173]: 2026-01-23 11:49:12.660 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:49:13 compute-0 nova_compute[185173]: 2026-01-23 11:49:13.981 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:49:14 compute-0 nova_compute[185173]: 2026-01-23 11:49:14.005 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:49:14 compute-0 nova_compute[185173]: 2026-01-23 11:49:14.006 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:49:14 compute-0 nova_compute[185173]: 2026-01-23 11:49:14.008 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:14 compute-0 nova_compute[185173]: 2026-01-23 11:49:14.008 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:49:14 compute-0 nova_compute[185173]: 2026-01-23 11:49:14.008 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:49:15 compute-0 podman[239215]: 2026-01-23 11:49:15.744062262 +0000 UTC m=+0.066975654 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 11:49:15 compute-0 podman[239216]: 2026-01-23 11:49:15.757410511 +0000 UTC m=+0.076948470 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 23 11:49:16 compute-0 nova_compute[185173]: 2026-01-23 11:49:16.453 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:17 compute-0 nova_compute[185173]: 2026-01-23 11:49:17.156 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:17 compute-0 podman[239255]: 2026-01-23 11:49:17.795503478 +0000 UTC m=+0.114588329 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 11:49:19 compute-0 podman[239274]: 2026-01-23 11:49:19.792024128 +0000 UTC m=+0.122946625 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 11:49:21 compute-0 nova_compute[185173]: 2026-01-23 11:49:21.456 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:22 compute-0 nova_compute[185173]: 2026-01-23 11:49:22.158 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:26 compute-0 nova_compute[185173]: 2026-01-23 11:49:26.458 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:26 compute-0 podman[239300]: 2026-01-23 11:49:26.734611328 +0000 UTC m=+0.065844256 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:49:27 compute-0 nova_compute[185173]: 2026-01-23 11:49:27.161 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:28 compute-0 podman[239318]: 2026-01-23 11:49:28.730497104 +0000 UTC m=+0.059603222 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, vcs-type=git, config_id=kepler, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:49:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:49:29.093 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:49:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:49:29.094 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:49:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:49:29.095 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:49:29 compute-0 podman[201022]: time="2026-01-23T11:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:49:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:49:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Jan 23 11:49:31 compute-0 openstack_network_exporter[204160]: ERROR   11:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:49:31 compute-0 openstack_network_exporter[204160]: ERROR   11:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:49:31 compute-0 nova_compute[185173]: 2026-01-23 11:49:31.460 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:32 compute-0 nova_compute[185173]: 2026-01-23 11:49:32.164 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:32 compute-0 podman[239338]: 2026-01-23 11:49:32.776028069 +0000 UTC m=+0.097942648 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:49:36 compute-0 nova_compute[185173]: 2026-01-23 11:49:36.463 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:37 compute-0 nova_compute[185173]: 2026-01-23 11:49:37.167 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:37 compute-0 podman[239362]: 2026-01-23 11:49:37.776939332 +0000 UTC m=+0.094511404 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Jan 23 11:49:41 compute-0 nova_compute[185173]: 2026-01-23 11:49:41.464 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:42 compute-0 nova_compute[185173]: 2026-01-23 11:49:42.171 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:46 compute-0 nova_compute[185173]: 2026-01-23 11:49:46.467 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:46 compute-0 podman[239384]: 2026-01-23 11:49:46.751031177 +0000 UTC m=+0.076552250 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 11:49:46 compute-0 podman[239385]: 2026-01-23 11:49:46.754696067 +0000 UTC m=+0.081815750 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 23 11:49:47 compute-0 nova_compute[185173]: 2026-01-23 11:49:47.175 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:48 compute-0 podman[239427]: 2026-01-23 11:49:48.771876777 +0000 UTC m=+0.104693915 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 11:49:50 compute-0 podman[239446]: 2026-01-23 11:49:50.763057256 +0000 UTC m=+0.099016764 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:49:51 compute-0 nova_compute[185173]: 2026-01-23 11:49:51.469 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:52 compute-0 nova_compute[185173]: 2026-01-23 11:49:52.178 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:56 compute-0 nova_compute[185173]: 2026-01-23 11:49:56.472 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:57 compute-0 nova_compute[185173]: 2026-01-23 11:49:57.182 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:49:57 compute-0 podman[239474]: 2026-01-23 11:49:57.797106523 +0000 UTC m=+0.125924759 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 23 11:49:59 compute-0 podman[239492]: 2026-01-23 11:49:59.746239024 +0000 UTC m=+0.081660507 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-type=git, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, container_name=kepler, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc.)
Jan 23 11:49:59 compute-0 podman[201022]: time="2026-01-23T11:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:49:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:49:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
Jan 23 11:50:01 compute-0 openstack_network_exporter[204160]: ERROR   11:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:50:01 compute-0 openstack_network_exporter[204160]: ERROR   11:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:50:01 compute-0 nova_compute[185173]: 2026-01-23 11:50:01.475 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:02 compute-0 nova_compute[185173]: 2026-01-23 11:50:02.185 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:03 compute-0 podman[239511]: 2026-01-23 11:50:03.770639358 +0000 UTC m=+0.103197917 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 11:50:06 compute-0 nova_compute[185173]: 2026-01-23 11:50:06.477 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:07 compute-0 nova_compute[185173]: 2026-01-23 11:50:07.188 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:07 compute-0 nova_compute[185173]: 2026-01-23 11:50:07.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:08 compute-0 podman[239534]: 2026-01-23 11:50:08.771675004 +0000 UTC m=+0.103850114 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=)
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.267 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.269 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.270 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.367 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.467 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.469 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.534 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.536 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.634 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.635 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.698 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.708 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.799 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.800 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.860 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.861 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.922 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:09 compute-0 nova_compute[185173]: 2026-01-23 11:50:09.924 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.016 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.405 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.407 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4979MB free_disk=72.40069198608398GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.407 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.407 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.488 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.488 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.489 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.489 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.545 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.569 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.570 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:50:10 compute-0 nova_compute[185173]: 2026-01-23 11:50:10.571 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:50:11 compute-0 nova_compute[185173]: 2026-01-23 11:50:11.479 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:11 compute-0 nova_compute[185173]: 2026-01-23 11:50:11.571 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:11 compute-0 nova_compute[185173]: 2026-01-23 11:50:11.618 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:11 compute-0 nova_compute[185173]: 2026-01-23 11:50:11.619 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:11 compute-0 nova_compute[185173]: 2026-01-23 11:50:11.620 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:12 compute-0 nova_compute[185173]: 2026-01-23 11:50:12.191 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:13 compute-0 nova_compute[185173]: 2026-01-23 11:50:13.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:13 compute-0 nova_compute[185173]: 2026-01-23 11:50:13.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:50:13 compute-0 nova_compute[185173]: 2026-01-23 11:50:13.727 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:50:13 compute-0 nova_compute[185173]: 2026-01-23 11:50:13.728 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:50:13 compute-0 nova_compute[185173]: 2026-01-23 11:50:13.729 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:50:15 compute-0 nova_compute[185173]: 2026-01-23 11:50:15.091 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:50:15 compute-0 nova_compute[185173]: 2026-01-23 11:50:15.129 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:50:15 compute-0 nova_compute[185173]: 2026-01-23 11:50:15.130 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:50:15 compute-0 nova_compute[185173]: 2026-01-23 11:50:15.131 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:15 compute-0 nova_compute[185173]: 2026-01-23 11:50:15.131 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:50:15 compute-0 nova_compute[185173]: 2026-01-23 11:50:15.132 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:50:16 compute-0 nova_compute[185173]: 2026-01-23 11:50:16.481 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:17 compute-0 nova_compute[185173]: 2026-01-23 11:50:17.194 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:17 compute-0 podman[239581]: 2026-01-23 11:50:17.728769139 +0000 UTC m=+0.060220317 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:50:17 compute-0 podman[239582]: 2026-01-23 11:50:17.75434772 +0000 UTC m=+0.082978889 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Jan 23 11:50:19 compute-0 podman[239626]: 2026-01-23 11:50:19.742868402 +0000 UTC m=+0.075022572 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:50:20 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 11:50:21 compute-0 nova_compute[185173]: 2026-01-23 11:50:21.484 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:21 compute-0 podman[239645]: 2026-01-23 11:50:21.808920289 +0000 UTC m=+0.141751159 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 11:50:22 compute-0 nova_compute[185173]: 2026-01-23 11:50:22.196 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:26 compute-0 nova_compute[185173]: 2026-01-23 11:50:26.488 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:27 compute-0 nova_compute[185173]: 2026-01-23 11:50:27.199 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:28 compute-0 podman[239671]: 2026-01-23 11:50:28.734100739 +0000 UTC m=+0.065283372 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 11:50:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:50:29.094 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:50:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:50:29.094 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:50:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:50:29.094 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:50:29 compute-0 podman[201022]: time="2026-01-23T11:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:50:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:50:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4362 "" "Go-http-client/1.1"
Jan 23 11:50:30 compute-0 podman[239689]: 2026-01-23 11:50:30.755413959 +0000 UTC m=+0.089650197 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:50:31 compute-0 openstack_network_exporter[204160]: ERROR   11:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:50:31 compute-0 openstack_network_exporter[204160]: ERROR   11:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:50:31 compute-0 nova_compute[185173]: 2026-01-23 11:50:31.491 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:32 compute-0 nova_compute[185173]: 2026-01-23 11:50:32.202 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:34 compute-0 podman[239709]: 2026-01-23 11:50:34.730539998 +0000 UTC m=+0.065400697 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:50:36 compute-0 nova_compute[185173]: 2026-01-23 11:50:36.494 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:37 compute-0 nova_compute[185173]: 2026-01-23 11:50:37.206 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:39 compute-0 podman[239734]: 2026-01-23 11:50:39.745895942 +0000 UTC m=+0.079055738 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 23 11:50:41 compute-0 nova_compute[185173]: 2026-01-23 11:50:41.497 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:42 compute-0 nova_compute[185173]: 2026-01-23 11:50:42.209 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:46 compute-0 nova_compute[185173]: 2026-01-23 11:50:46.499 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:47 compute-0 nova_compute[185173]: 2026-01-23 11:50:47.213 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
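The recurring ovsdbapp.backend.ovs_idl.vlog "[POLLIN] on fd 26" lines mean the OVSDB IDL's socket became readable and the poll loop woke to service it; at DEBUG level every such wakeup is logged. The stdlib sketch below illustrates what a readiness wakeup is; it is not the ovs/ovsdbapp poller, just the underlying poll(2) pattern.

    import select
    import socket

    def wait_readable(sock: socket.socket, timeout_ms: int = 5000) -> bool:
        # Register the descriptor for readability and block until it is ready
        # or the timeout expires; each (fd, mask) result with POLLIN set is the
        # kind of event the "[POLLIN] on fd <n>" log lines report.
        p = select.poll()
        p.register(sock.fileno(), select.POLLIN)
        events = p.poll(timeout_ms)
        return any(mask & select.POLLIN for _, mask in events)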
Jan 23 11:50:48 compute-0 podman[239753]: 2026-01-23 11:50:48.756461376 +0000 UTC m=+0.082224523 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:50:48 compute-0 podman[239754]: 2026-01-23 11:50:48.773320332 +0000 UTC m=+0.086691088 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Jan 23 11:50:50 compute-0 podman[239798]: 2026-01-23 11:50:50.734601769 +0000 UTC m=+0.069230217 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 11:50:51 compute-0 nova_compute[185173]: 2026-01-23 11:50:51.501 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:52 compute-0 nova_compute[185173]: 2026-01-23 11:50:52.216 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:52 compute-0 podman[239816]: 2026-01-23 11:50:52.760136426 +0000 UTC m=+0.092987995 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 11:50:56 compute-0 nova_compute[185173]: 2026-01-23 11:50:56.503 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:57 compute-0 nova_compute[185173]: 2026-01-23 11:50:57.219 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:50:59 compute-0 podman[201022]: time="2026-01-23T11:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:50:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:50:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
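The two "GET /v4.9.3/libpod/..." lines are the libpod REST API answering over the podman socket that the podman_exporter container mounts (unix:///run/podman/podman.sock per its CONTAINER_HOST above). Below is a stdlib-only sketch of issuing the same containers/json query over that Unix socket; the socket path is taken from the config above and may differ on other hosts.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client speaks HTTP/1.1 fine over a Unix socket once connect()
        # is overridden; the "localhost" host value is only used for headers.
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # path from the log
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")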
Jan 23 11:50:59 compute-0 podman[239845]: 2026-01-23 11:50:59.785905931 +0000 UTC m=+0.109426792 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 23 11:51:01 compute-0 openstack_network_exporter[204160]: ERROR   11:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:51:01 compute-0 openstack_network_exporter[204160]: ERROR   11:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.452 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.452 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
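The pair of DEBUG lines above says the [pollsters] source has more pollsters than worker threads, so pollster tasks serialize on the executor and the cycle stretches. The sketch below reproduces that back-pressure with concurrent.futures; it illustrates the effect only and is not ceilometer's polling manager.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]
    start = time.monotonic()
    # max_workers=1 mirrors the "[1] threads" in the log: eight tasks run
    # back to back, so the cycle takes roughly 8 x 0.1s instead of ~0.1s.
    with ThreadPoolExecutor(max_workers=1) as executor:
        results = list(executor.map(poll, pollsters))
    print(f"{len(results)} pollsters in {time.monotonic() - start:.2f}s")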
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.452 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.458 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.461 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'name': 'vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
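Discovery hands the pollsters one dict per libvirt instance, and the sample lines that follow are keyed by its 'id' field (e.g. 55846fbf-.../disk.device.usage). A trivial index over that payload, with field names copied from the two records above and values abbreviated to the fields used here:

    instances = [
        {"id": "55846fbf-a87a-4cba-be0b-23125d3d9ef4", "name": "test_0",
         "OS-EXT-STS:vm_state": "running"},
        {"id": "84b3f69a-6ab7-406d-939b-a485518755a5",
         "name": "vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk",
         "OS-EXT-STS:vm_state": "running"},
    ]
    by_id = {inst["id"]: inst for inst in instances}
    running = [i["name"] for i in instances
               if i["OS-EXT-STS:vm_state"] == "running"]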
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.461 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:51:01.462085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.465 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.468 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes.delta volume: 3113 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
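The .delta meter polled above reports the change in a cumulative interface counter between consecutive cycles (volume: 70 and volume: 3113 are bytes since the previous poll). A hedged sketch of that relationship follows; the names are illustrative, not ceilometer internals.

    # Cache the last cumulative reading per resource and emit the difference;
    # None signals "no sample" on the first poll or after a counter reset.
    _last_reading: dict[str, int] = {}

    def delta_sample(resource_id: str, cumulative_bytes: int) -> int | None:
        prev = _last_reading.get(resource_id)
        _last_reading[resource_id] = cumulative_bytes
        if prev is None or cumulative_bytes < prev:
            return None
        return cumulative_bytes - prev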
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:51:01.469857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.489 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.490 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.490 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 nova_compute[185173]: 2026-01-23 11:51:01.505 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.517 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.518 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.518 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.519 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:51:01.519463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.572 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.572 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.572 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.634 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.635 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.635 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.636 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.637 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.637 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:51:01.636878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.637 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.637 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 801641355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.638 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 8862519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.638 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.639 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.639 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:51:01.639359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.641 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.641 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.641 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.642 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.642 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.642 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.643 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:51:01.640867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.644 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.644 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:51:01.644050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:51:01.645354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:51:01.646595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.646 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.647 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.647 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.647 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 363540160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.647 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 61167194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.648 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 48392812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
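The three disk.device.read.latency volumes per instance above are cumulative nanosecond counters, one per attached device, so a consumer normally differences consecutive polls. A minimal sketch of that, where the current readings match the log but the polling interval, device labels, and previous readings are assumptions:

# Sketch: difference two successive cumulative read-latency readings (ns).
# curr matches the 55846fbf... volumes logged above; prev and the device
# labels are made up for illustration.
POLL_INTERVAL_S = 300

prev = {"dev0": 639_000_000, "dev1": 72_000_000, "dev2": 43_500_000}
curr = {"dev0": 639_933_059, "dev1": 72_530_295, "dev2": 43_879_093}

for dev in curr:
    delta_ns = curr[dev] - prev[dev]
    print(f"{dev}: {delta_ns / 1e6:.3f} ms of read latency accrued this interval")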
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.649 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:51:01.648991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.649 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.649 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.650 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.650 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.650 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:51:01.651446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.652 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:51:01.652630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.653 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:51:01.653926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.676 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.710 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/memory.usage volume: 49.19140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
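The fractional memory.usage volumes above (48.9140625 and 49.19140625) are what you get when a KiB figure from the hypervisor is divided by 1024, which suggests the meter is reported in MiB; back-deriving the KiB readings gives clean integers:

# Back-deriving the KiB readings behind the memory.usage volumes logged above.
for mib in (48.9140625, 49.19140625):
    print(f"{mib} MiB == {mib * 1024:.0f} KiB")  # 50088 and 50372 KiB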
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.712 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.713 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.713 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.714 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:51:01.713351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
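Both instances report power.state volume 1, which in the standard nova/libvirt power-state enumeration is the running state. The table below reproduces that enumeration as an assumption about what this meter encodes:

# Standard nova power-state codes; volume 1 in the log above maps to running.
POWER_STATES = {0: "nostate", 1: "running", 3: "paused",
                4: "shutdown", 6: "crashed", 7: "suspended"}
print(POWER_STATES[1])  # running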
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.716 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.717 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:51:01.716184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.718 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.719 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 35030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.720 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/cpu volume: 77690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
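The cpu volumes above are cumulative guest CPU time in nanoseconds, so utilisation needs a delta across polls plus the vCPU count. A worked sketch where the current reading matches the log and the previous reading, interval, and vCPU count are assumed:

# Sketch: cumulative cpu time (ns) -> average utilisation over the interval.
# curr_ns matches the 55846fbf... volume logged above; everything else is assumed.
prev_ns, curr_ns = 34_970_000_000, 35_030_000_000
interval_s, vcpus = 300, 1
util_pct = (curr_ns - prev_ns) / (interval_s * vcpus * 1e9) * 100
print(f"{util_pct:.3f}% average CPU utilisation")  # 0.020%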
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.722 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.723 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:51:01.719218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:51:01.722484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.725 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.726 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:51:01.725213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.729 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:51:01.728827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.729 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.730 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.731 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.731 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.732 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.733 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.734 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.734 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:51:01.733581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.735 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.736 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes.delta volume: 3405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
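network.incoming.bytes.delta reports only the change since the previous poll, while the network.incoming.bytes reading earlier in this same cycle is cumulative. Assuming both meters sampled the same interface counter this cycle, the previous cumulative reading falls out by subtraction:

# Both values for instance 84b3f69a... are from this polling cycle (see above).
curr_cumulative = 4891  # network.incoming.bytes
delta = 3405            # network.incoming.bytes.delta
print(curr_cumulative - delta)  # 1486 bytes at the previous poll, if the counters align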
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:51:01.735802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:51:01.737700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.738 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.738 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.738 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.739 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.739 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
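The repeated disk.device.capacity volume of 1073741824 above is exactly one GiB, i.e. each instance carries 1 GiB disk devices plus one much smaller third device (485376 and 583680 bytes respectively):

# The capacity volumes logged above are exact powers of two.
print(1_073_741_824 == 2**30)  # True: 1 GiB devices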
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.741 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.741 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.741 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes volume: 4934 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:51:01.740971) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.743 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.744 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.744 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.744 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.744 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.745 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:51:01.743688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:51:01.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
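All of the measurements in this polling cycle live in the _stats_to_sample DEBUG lines (instance UUID, meter name, volume). A small parser written against the exact line shape seen above is enough to pull them out for ad-hoc analysis:

import re

# Matches the "<uuid>/<meter> volume: <n> _stats_to_sample" fragment used in
# the DEBUG lines above; everything outside that fragment is ignored.
SAMPLE_RE = re.compile(
    r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})/"
    r"(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+) _stats_to_sample"
)

line = ("Jan 23 11:51:01 compute-0 ceilometer_agent_compute[194869]: "
        "2026-01-23 11:51:01.719 14 DEBUG ceilometer.compute.pollsters [-] "
        "55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 35030000000 "
        "_stats_to_sample /usr/lib/python3.12/site-packages/"
        "ceilometer/compute/pollsters/__init__.py:108")

m = SAMPLE_RE.search(line)
if m:
    print(m.group("uuid"), m.group("meter"), float(m.group("volume")))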
Jan 23 11:51:01 compute-0 podman[239865]: 2026-01-23 11:51:01.765060238 +0000 UTC m=+0.095700108 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, name=ubi9)
Jan 23 11:51:02 compute-0 nova_compute[185173]: 2026-01-23 11:51:02.222 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:05 compute-0 podman[239883]: 2026-01-23 11:51:05.743122897 +0000 UTC m=+0.077559563 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:51:06 compute-0 nova_compute[185173]: 2026-01-23 11:51:06.510 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:07 compute-0 nova_compute[185173]: 2026-01-23 11:51:07.225 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:08 compute-0 nova_compute[185173]: 2026-01-23 11:51:08.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.269 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.271 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.272 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.273 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.376 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.441 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.443 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.506 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.508 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.567 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.570 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.635 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.647 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.712 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.715 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.780 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.782 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.856 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.857 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:51:09 compute-0 nova_compute[185173]: 2026-01-23 11:51:09.929 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.262 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.264 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4942MB free_disk=72.40071105957031GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.265 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.265 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.419 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.420 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.421 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.421 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.487 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.501 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.503 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:51:10 compute-0 nova_compute[185173]: 2026-01-23 11:51:10.504 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:51:10 compute-0 podman[239930]: 2026-01-23 11:51:10.768387872 +0000 UTC m=+0.099566429 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Jan 23 11:51:11 compute-0 nova_compute[185173]: 2026-01-23 11:51:11.504 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:11 compute-0 nova_compute[185173]: 2026-01-23 11:51:11.504 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:11 compute-0 nova_compute[185173]: 2026-01-23 11:51:11.504 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:11 compute-0 nova_compute[185173]: 2026-01-23 11:51:11.515 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:12 compute-0 nova_compute[185173]: 2026-01-23 11:51:12.227 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:12 compute-0 nova_compute[185173]: 2026-01-23 11:51:12.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:14 compute-0 nova_compute[185173]: 2026-01-23 11:51:14.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.774 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.775 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.775 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:51:15 compute-0 nova_compute[185173]: 2026-01-23 11:51:15.776 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:51:16 compute-0 nova_compute[185173]: 2026-01-23 11:51:16.517 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:17 compute-0 nova_compute[185173]: 2026-01-23 11:51:17.125 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:51:17 compute-0 nova_compute[185173]: 2026-01-23 11:51:17.146 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:51:17 compute-0 nova_compute[185173]: 2026-01-23 11:51:17.147 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:51:17 compute-0 nova_compute[185173]: 2026-01-23 11:51:17.148 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:51:17 compute-0 nova_compute[185173]: 2026-01-23 11:51:17.148 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:51:17 compute-0 nova_compute[185173]: 2026-01-23 11:51:17.229 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:19 compute-0 podman[239953]: 2026-01-23 11:51:19.728692667 +0000 UTC m=+0.063493343 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:51:19 compute-0 podman[239954]: 2026-01-23 11:51:19.763506804 +0000 UTC m=+0.095884063 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 23 11:51:21 compute-0 nova_compute[185173]: 2026-01-23 11:51:21.519 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:21 compute-0 podman[239996]: 2026-01-23 11:51:21.810846943 +0000 UTC m=+0.123501662 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 11:51:22 compute-0 nova_compute[185173]: 2026-01-23 11:51:22.232 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:23 compute-0 podman[240014]: 2026-01-23 11:51:23.778442448 +0000 UTC m=+0.110585827 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 23 11:51:26 compute-0 nova_compute[185173]: 2026-01-23 11:51:26.521 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:27 compute-0 nova_compute[185173]: 2026-01-23 11:51:27.234 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:51:29.095 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:51:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:51:29.097 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:51:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:51:29.098 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:51:29 compute-0 podman[201022]: time="2026-01-23T11:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:51:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:51:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4362 "" "Go-http-client/1.1"
Jan 23 11:51:30 compute-0 podman[240038]: 2026-01-23 11:51:30.748135176 +0000 UTC m=+0.081687719 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 11:51:31 compute-0 openstack_network_exporter[204160]: ERROR   11:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:51:31 compute-0 openstack_network_exporter[204160]: ERROR   11:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:51:31 compute-0 nova_compute[185173]: 2026-01-23 11:51:31.523 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:32 compute-0 nova_compute[185173]: 2026-01-23 11:51:32.238 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:32 compute-0 podman[240055]: 2026-01-23 11:51:32.7441923 +0000 UTC m=+0.078339831 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 23 11:51:36 compute-0 nova_compute[185173]: 2026-01-23 11:51:36.524 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:36 compute-0 podman[240075]: 2026-01-23 11:51:36.71378066 +0000 UTC m=+0.050607560 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:51:37 compute-0 nova_compute[185173]: 2026-01-23 11:51:37.239 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:41 compute-0 nova_compute[185173]: 2026-01-23 11:51:41.528 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:41 compute-0 podman[240098]: 2026-01-23 11:51:41.732715896 +0000 UTC m=+0.070044306 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350)
Jan 23 11:51:42 compute-0 nova_compute[185173]: 2026-01-23 11:51:42.242 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:46 compute-0 nova_compute[185173]: 2026-01-23 11:51:46.532 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:47 compute-0 nova_compute[185173]: 2026-01-23 11:51:47.246 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:50 compute-0 podman[240121]: 2026-01-23 11:51:50.747032129 +0000 UTC m=+0.069014622 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 23 11:51:50 compute-0 podman[240120]: 2026-01-23 11:51:50.757064595 +0000 UTC m=+0.085449248 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:51:51 compute-0 nova_compute[185173]: 2026-01-23 11:51:51.536 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:52 compute-0 nova_compute[185173]: 2026-01-23 11:51:52.249 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:52 compute-0 podman[240162]: 2026-01-23 11:51:52.75700481 +0000 UTC m=+0.084166378 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 23 11:51:54 compute-0 podman[240181]: 2026-01-23 11:51:54.781194976 +0000 UTC m=+0.108644723 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 11:51:56 compute-0 nova_compute[185173]: 2026-01-23 11:51:56.536 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:57 compute-0 nova_compute[185173]: 2026-01-23 11:51:57.252 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:51:59 compute-0 podman[201022]: time="2026-01-23T11:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:51:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:51:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
Jan 23 11:52:01 compute-0 openstack_network_exporter[204160]: ERROR   11:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:52:01 compute-0 openstack_network_exporter[204160]: ERROR   11:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:52:01 compute-0 nova_compute[185173]: 2026-01-23 11:52:01.538 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:01 compute-0 podman[240207]: 2026-01-23 11:52:01.730443763 +0000 UTC m=+0.063769419 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:52:02 compute-0 nova_compute[185173]: 2026-01-23 11:52:02.255 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:03 compute-0 podman[240227]: 2026-01-23 11:52:03.729395005 +0000 UTC m=+0.062016608 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=)
Jan 23 11:52:06 compute-0 nova_compute[185173]: 2026-01-23 11:52:06.539 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:07 compute-0 nova_compute[185173]: 2026-01-23 11:52:07.258 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:07 compute-0 podman[240247]: 2026-01-23 11:52:07.789789117 +0000 UTC m=+0.114806317 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:52:09 compute-0 nova_compute[185173]: 2026-01-23 11:52:09.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.279 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.279 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.280 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.280 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.404 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.478 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.479 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.538 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.543 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.602 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.603 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.659 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.665 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.726 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.728 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.789 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.794 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.853 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.854 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:52:10 compute-0 nova_compute[185173]: 2026-01-23 11:52:10.910 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.211 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.212 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4940MB free_disk=72.40069198608398GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.213 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.213 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.330 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.331 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.331 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.332 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.354 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.381 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.382 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.405 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.428 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.500 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.543 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.547 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.548 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:52:11 compute-0 nova_compute[185173]: 2026-01-23 11:52:11.549 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:12 compute-0 nova_compute[185173]: 2026-01-23 11:52:12.261 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:12 compute-0 nova_compute[185173]: 2026-01-23 11:52:12.551 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:12 compute-0 nova_compute[185173]: 2026-01-23 11:52:12.554 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:12 compute-0 nova_compute[185173]: 2026-01-23 11:52:12.581 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:12 compute-0 nova_compute[185173]: 2026-01-23 11:52:12.582 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:12 compute-0 podman[240293]: 2026-01-23 11:52:12.796232161 +0000 UTC m=+0.128106010 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 23 11:52:14 compute-0 nova_compute[185173]: 2026-01-23 11:52:14.239 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:15 compute-0 nova_compute[185173]: 2026-01-23 11:52:15.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:15 compute-0 nova_compute[185173]: 2026-01-23 11:52:15.238 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:15 compute-0 nova_compute[185173]: 2026-01-23 11:52:15.238 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:52:16 compute-0 nova_compute[185173]: 2026-01-23 11:52:16.240 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:52:16 compute-0 nova_compute[185173]: 2026-01-23 11:52:16.241 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:52:16 compute-0 nova_compute[185173]: 2026-01-23 11:52:16.544 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:16 compute-0 nova_compute[185173]: 2026-01-23 11:52:16.798 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:52:16 compute-0 nova_compute[185173]: 2026-01-23 11:52:16.799 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:52:16 compute-0 nova_compute[185173]: 2026-01-23 11:52:16.800 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:52:17 compute-0 nova_compute[185173]: 2026-01-23 11:52:17.265 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:18 compute-0 nova_compute[185173]: 2026-01-23 11:52:18.564 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:52:18 compute-0 nova_compute[185173]: 2026-01-23 11:52:18.583 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:52:18 compute-0 nova_compute[185173]: 2026-01-23 11:52:18.584 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:52:21 compute-0 nova_compute[185173]: 2026-01-23 11:52:21.547 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:21 compute-0 podman[240314]: 2026-01-23 11:52:21.780157619 +0000 UTC m=+0.097431449 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:52:21 compute-0 podman[240315]: 2026-01-23 11:52:21.793024412 +0000 UTC m=+0.107255101 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 23 11:52:22 compute-0 nova_compute[185173]: 2026-01-23 11:52:22.269 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:23 compute-0 podman[240356]: 2026-01-23 11:52:23.742193045 +0000 UTC m=+0.075586736 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:52:25 compute-0 podman[240374]: 2026-01-23 11:52:25.774409758 +0000 UTC m=+0.108246243 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 23 11:52:26 compute-0 nova_compute[185173]: 2026-01-23 11:52:26.549 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:27 compute-0 nova_compute[185173]: 2026-01-23 11:52:27.272 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:52:29.096 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:52:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:52:29.097 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:52:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:52:29.098 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:52:29 compute-0 podman[201022]: time="2026-01-23T11:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:52:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:52:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 23 11:52:31 compute-0 openstack_network_exporter[204160]: ERROR   11:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:52:31 compute-0 openstack_network_exporter[204160]: ERROR   11:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:52:31 compute-0 nova_compute[185173]: 2026-01-23 11:52:31.553 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:32 compute-0 nova_compute[185173]: 2026-01-23 11:52:32.275 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:32 compute-0 podman[240398]: 2026-01-23 11:52:32.755609256 +0000 UTC m=+0.088886499 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 11:52:34 compute-0 podman[240416]: 2026-01-23 11:52:34.772960671 +0000 UTC m=+0.101987357 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, vcs-type=git, container_name=kepler, release=1214.1726694543, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=kepler)
Jan 23 11:52:36 compute-0 nova_compute[185173]: 2026-01-23 11:52:36.556 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:37 compute-0 nova_compute[185173]: 2026-01-23 11:52:37.278 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:38 compute-0 podman[240436]: 2026-01-23 11:52:38.749586756 +0000 UTC m=+0.067584909 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 11:52:41 compute-0 nova_compute[185173]: 2026-01-23 11:52:41.559 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:42 compute-0 nova_compute[185173]: 2026-01-23 11:52:42.281 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:43 compute-0 podman[240458]: 2026-01-23 11:52:43.765566371 +0000 UTC m=+0.102035894 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, architecture=x86_64, managed_by=edpm_ansible)
Jan 23 11:52:46 compute-0 nova_compute[185173]: 2026-01-23 11:52:46.562 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:47 compute-0 nova_compute[185173]: 2026-01-23 11:52:47.283 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:51 compute-0 nova_compute[185173]: 2026-01-23 11:52:51.565 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:52 compute-0 nova_compute[185173]: 2026-01-23 11:52:52.287 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
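The recurring ovsdbapp lines are the OVS Python IDL waking up whenever its OVSDB connection (fd 26 here) becomes readable; __log_wakeup in ovs/poller.py emits one DEBUG line per POLLIN event. A self-contained sketch of that poll()-based wakeup pattern, with a socketpair standing in for the OVSDB socket:

```python
# Hedged sketch of the wakeup loop behind the "[POLLIN] on fd N" lines.
# The socketpair is a stand-in; the real fd belongs to the OVSDB session.
import select
import socket

reader, writer = socket.socketpair()
poller = select.poll()
poller.register(reader, select.POLLIN)

writer.send(b"table update")            # e.g. an OVSDB notification arriving
for fd, events in poller.poll(1000):    # timeout in milliseconds
    if events & select.POLLIN:
        print(f"[POLLIN] on fd {fd}")   # what ovs/poller.py logs at DEBUG
        print("payload:", reader.recv(64))
```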
Jan 23 11:52:52 compute-0 podman[240482]: 2026-01-23 11:52:52.737610151 +0000 UTC m=+0.070130555 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Jan 23 11:52:52 compute-0 podman[240481]: 2026-01-23 11:52:52.745364703 +0000 UTC m=+0.080990194 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 11:52:54 compute-0 podman[240521]: 2026-01-23 11:52:54.744368783 +0000 UTC m=+0.064625220 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 11:52:56 compute-0 nova_compute[185173]: 2026-01-23 11:52:56.567 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:56 compute-0 podman[240540]: 2026-01-23 11:52:56.755570955 +0000 UTC m=+0.094562200 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible)
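Each podman health_status event above is emitted after the container's configured healthcheck ('test': '/openstack/healthcheck ...') runs and passes. A sketch of reading the same state on demand, assuming podman's inspect output mirrors Docker's State.Health layout, and reusing the ovn_controller name from the log:

```python
# Hedged sketch: fetch current health for one of the containers above.
# Schema assumption: Status and FailingStreak live under .State.Health.
import json
import subprocess

out = subprocess.run(
    ["podman", "inspect", "ovn_controller"],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out)[0]["State"]["Health"]
print(health["Status"], "failing streak:", health["FailingStreak"])
```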
Jan 23 11:52:57 compute-0 nova_compute[185173]: 2026-01-23 11:52:57.291 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:52:59 compute-0 podman[201022]: time="2026-01-23T11:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:52:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:52:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Jan 23 11:53:01 compute-0 openstack_network_exporter[204160]: ERROR   11:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:53:01 compute-0 openstack_network_exporter[204160]: ERROR   11:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.452 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.453 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
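The two manager lines above say the [pollsters] source has more pollsters than worker threads, and that a single thread will process them, so the tasks serialize and the polling cycle stretches. A toy illustration of that effect with concurrent.futures (the names are illustrative, not ceilometer's API):

```python
# Hedged sketch: 8 pollsters on 1 worker thread take ~8x one pollster's
# runtime, which is exactly the slowdown the DEBUG message warns about.
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)          # pretend each pollster needs 100 ms
    return name

pollsters = [f"pollster-{i}" for i in range(8)]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
    list(executor.map(poll, pollsters))
print(f"cycle took {time.monotonic() - start:.2f}s") # ~0.8 s, not ~0.1 s
```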
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.454 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2842758a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.458 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.461 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'name': 'vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
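The discovery records above combine local libvirt domains with Nova metadata (flavor, image, tenant). A minimal sketch of the libvirt half of that discovery, assuming the libvirt-python bindings and the qemu:///system URI that nova_compute uses:

```python
# Hedged sketch: enumerate the local guests a libvirt-backed discovery
# pass would see. Domain names match OS-EXT-SRV-ATTR:instance_name above.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(dom.name(), dom.UUIDString(), state)
finally:
    conn.close()
```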
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.461 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.461 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:53:01.461702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.464 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.467 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
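network.outgoing.bytes.delta is a per-cycle figure: the cumulative interface counter read this poll minus the one read last poll (the 0 and 70 byte volumes above). A sketch of that cumulative-to-delta derivation; this is the general idea, not ceilometer's actual code:

```python
# Hedged sketch: derive a *.delta sample from successive cumulative reads.
previous = {}

def bytes_delta(instance_id, cumulative_tx_bytes):
    last = previous.get(instance_id)
    previous[instance_id] = cumulative_tx_bytes
    if last is None:
        return 0                                   # first cycle: no baseline
    return max(cumulative_tx_bytes - last, 0)      # guard against counter reset

print(bytes_delta("84b3f69a", 1000))  # first poll  -> 0
print(bytes_delta("84b3f69a", 1070))  # second poll -> 70
```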
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:53:01.468482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.493 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.493 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.493 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.514 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.514 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.514 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:53:01.515591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.570 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.570 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.570 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 nova_compute[185173]: 2026-01-23 11:53:01.572 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.624 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.625 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.625 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.627 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:53:01.628228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.628 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.629 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 801641355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.629 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 8862519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.629 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:53:01.630263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:53:01.631523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.631 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.632 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.632 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.632 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.632 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:53:01.633488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.633 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:53:01.634600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.635 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.636 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.636 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 363540160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.636 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 61167194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.636 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 48392812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:53:01.635557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.637 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:53:01.637639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.638 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.638 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.638 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.638 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.639 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:53:01.639669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.640 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:53:01.640623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:53:01.641767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.660 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.678 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/memory.usage volume: 49.18359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.678 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.679 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:53:01.679170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.679 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:53:01.680454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.680 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 36370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:53:01.681721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/cpu volume: 197360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.683 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.683 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:53:01.683021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.683 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.684 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.684 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.684 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:53:01.684327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.685 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:53:01.685801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.686 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.686 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.686 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.686 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.687 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:53:01.687914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets volume: 45 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.689 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.689 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.689 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:53:01.689108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:53:01.690412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.690 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.691 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.691 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.691 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.691 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.692 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes volume: 5004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:53:01.692552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.693 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.694 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.694 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:53:01.693959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.694 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.694 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.694 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.695 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.695 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:53:01.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:53:02 compute-0 nova_compute[185173]: 2026-01-23 11:53:02.294 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:03 compute-0 podman[240567]: 2026-01-23 11:53:03.735849674 +0000 UTC m=+0.063805889 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Jan 23 11:53:04 compute-0 nova_compute[185173]: 2026-01-23 11:53:04.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:05 compute-0 podman[240586]: 2026-01-23 11:53:05.766884366 +0000 UTC m=+0.103209914 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, vcs-type=git, version=9.4, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 11:53:06 compute-0 nova_compute[185173]: 2026-01-23 11:53:06.575 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:07 compute-0 nova_compute[185173]: 2026-01-23 11:53:07.297 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:09 compute-0 podman[240607]: 2026-01-23 11:53:09.785088895 +0000 UTC m=+0.105656264 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.250 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.291 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.292 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.293 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.293 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.397 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.458 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.459 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.517 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.519 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.614 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.616 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.675 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.681 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.755 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.757 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.814 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.816 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.897 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.899 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:53:10 compute-0 nova_compute[185173]: 2026-01-23 11:53:10.973 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.306 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.307 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=72.40072250366211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.308 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.308 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.505 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.505 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.506 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.506 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.577 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.698 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.721 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.722 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:53:11 compute-0 nova_compute[185173]: 2026-01-23 11:53:11.723 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:53:12 compute-0 nova_compute[185173]: 2026-01-23 11:53:12.300 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:12 compute-0 nova_compute[185173]: 2026-01-23 11:53:12.708 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:12 compute-0 nova_compute[185173]: 2026-01-23 11:53:12.709 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:12 compute-0 nova_compute[185173]: 2026-01-23 11:53:12.710 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:13 compute-0 nova_compute[185173]: 2026-01-23 11:53:13.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:13 compute-0 nova_compute[185173]: 2026-01-23 11:53:13.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:13 compute-0 nova_compute[185173]: 2026-01-23 11:53:13.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 11:53:14 compute-0 podman[240654]: 2026-01-23 11:53:14.748997834 +0000 UTC m=+0.077116278 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 23 11:53:15 compute-0 nova_compute[185173]: 2026-01-23 11:53:15.295 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:15 compute-0 nova_compute[185173]: 2026-01-23 11:53:15.296 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.238 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.238 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.577 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.839 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.839 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.840 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:53:16 compute-0 nova_compute[185173]: 2026-01-23 11:53:16.840 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:53:17 compute-0 nova_compute[185173]: 2026-01-23 11:53:17.303 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.467 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.487 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.487 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.488 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.488 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.489 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.489 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 11:53:18 compute-0 nova_compute[185173]: 2026-01-23 11:53:18.505 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 11:53:19 compute-0 sshd-session[240675]: Connection closed by 45.148.10.240 port 57092
Jan 23 11:53:21 compute-0 nova_compute[185173]: 2026-01-23 11:53:21.580 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:22 compute-0 nova_compute[185173]: 2026-01-23 11:53:22.307 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:23 compute-0 podman[240678]: 2026-01-23 11:53:23.740899895 +0000 UTC m=+0.064873415 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 23 11:53:23 compute-0 podman[240677]: 2026-01-23 11:53:23.751076947 +0000 UTC m=+0.079952039 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:53:25 compute-0 podman[240718]: 2026-01-23 11:53:25.761785216 +0000 UTC m=+0.092724704 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:53:26 compute-0 nova_compute[185173]: 2026-01-23 11:53:26.582 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:27 compute-0 nova_compute[185173]: 2026-01-23 11:53:27.310 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:27 compute-0 podman[240735]: 2026-01-23 11:53:27.77225957 +0000 UTC m=+0.108468083 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:53:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:53:29.097 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:53:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:53:29.098 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:53:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:53:29.098 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:53:29 compute-0 podman[201022]: time="2026-01-23T11:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:53:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:53:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 23 11:53:31 compute-0 openstack_network_exporter[204160]: ERROR   11:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:53:31 compute-0 openstack_network_exporter[204160]: ERROR   11:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:53:31 compute-0 nova_compute[185173]: 2026-01-23 11:53:31.583 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:32 compute-0 nova_compute[185173]: 2026-01-23 11:53:32.314 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:34 compute-0 podman[240762]: 2026-01-23 11:53:34.736970604 +0000 UTC m=+0.070490184 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 11:53:36 compute-0 nova_compute[185173]: 2026-01-23 11:53:36.589 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:36 compute-0 podman[240783]: 2026-01-23 11:53:36.737240205 +0000 UTC m=+0.067924521 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Jan 23 11:53:37 compute-0 nova_compute[185173]: 2026-01-23 11:53:37.319 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:40 compute-0 podman[240803]: 2026-01-23 11:53:40.719378343 +0000 UTC m=+0.053174947 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
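
node_exporter's systemd collector above is restricted with --collector.systemd.unit-include. A quick Python check of which unit names the filter passes; the unit names below are hypothetical, and fullmatch() assumes node_exporter's usual anchoring of include patterns:

    import re

    # Pattern copied verbatim from the node_exporter command line above.
    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    units = ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"]
    print([u for u in units if pattern.fullmatch(u)])
    # sshd.service drops out; only the matched units are scraped.
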
Jan 23 11:53:41 compute-0 nova_compute[185173]: 2026-01-23 11:53:41.593 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:42 compute-0 nova_compute[185173]: 2026-01-23 11:53:42.323 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:45 compute-0 podman[240827]: 2026-01-23 11:53:45.742704681 +0000 UTC m=+0.069353706 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, vcs-type=git, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 23 11:53:46 compute-0 nova_compute[185173]: 2026-01-23 11:53:46.594 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:47 compute-0 nova_compute[185173]: 2026-01-23 11:53:47.325 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:51 compute-0 nova_compute[185173]: 2026-01-23 11:53:51.597 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:52 compute-0 nova_compute[185173]: 2026-01-23 11:53:52.327 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:54 compute-0 podman[240848]: 2026-01-23 11:53:54.716414032 +0000 UTC m=+0.051735351 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:53:54 compute-0 podman[240849]: 2026-01-23 11:53:54.74017384 +0000 UTC m=+0.068069185 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 11:53:56 compute-0 nova_compute[185173]: 2026-01-23 11:53:56.600 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:56 compute-0 podman[240892]: 2026-01-23 11:53:56.718257563 +0000 UTC m=+0.057235087 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 11:53:57 compute-0 nova_compute[185173]: 2026-01-23 11:53:57.329 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:53:58 compute-0 podman[240911]: 2026-01-23 11:53:58.755252592 +0000 UTC m=+0.090390507 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:53:59 compute-0 podman[201022]: time="2026-01-23T11:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:53:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:53:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
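
The two GET lines above are prometheus-podman-exporter polling podman's REST API over the unix socket named in its CONTAINER_HOST setting (hence the "Go-http-client/1.1" user agent). A standard-library Python sketch replaying the first request; the UnixHTTPConnection helper is mine, and reading /run/podman/podman.sock normally requires root:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (minimal helper)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c.get("State"))
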
Jan 23 11:54:01 compute-0 openstack_network_exporter[204160]: ERROR   11:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:54:01 compute-0 openstack_network_exporter[204160]: ERROR   11:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
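
The two ERROR lines are expected on this host: dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show report PMD thread statistics that only exist with OVS's userspace (netdev/DPDK) datapath, and ovs-vswitchd answers "please specify an existing datapath" when, as here, only the kernel datapath is in use. Reproducing the failing call (assumes ovs-appctl can reach the local ovs-vswitchd):

    import subprocess

    # Fails on a kernel-datapath host exactly as the exporter logs above.
    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                       capture_output=True, text=True)
    print(r.returncode, (r.stderr or r.stdout).strip())
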
Jan 23 11:54:01 compute-0 nova_compute[185173]: 2026-01-23 11:54:01.600 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:02 compute-0 nova_compute[185173]: 2026-01-23 11:54:02.332 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:05 compute-0 podman[240938]: 2026-01-23 11:54:05.73509888 +0000 UTC m=+0.068606928 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:54:06 compute-0 nova_compute[185173]: 2026-01-23 11:54:06.602 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:07 compute-0 nova_compute[185173]: 2026-01-23 11:54:07.335 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:07 compute-0 podman[240957]: 2026-01-23 11:54:07.778086368 +0000 UTC m=+0.107226283 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 23 11:54:10 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:10.146 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:54:10 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:10.147 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 11:54:10 compute-0 nova_compute[185173]: 2026-01-23 11:54:10.148 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:11 compute-0 nova_compute[185173]: 2026-01-23 11:54:11.603 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:11 compute-0 podman[240976]: 2026-01-23 11:54:11.725922978 +0000 UTC m=+0.058305523 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.253 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.254 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.293 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.294 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.294 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
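
The Acquiring/acquired/released triplet around "compute_resources" is oslo.concurrency's lock tracing (note the lockutils.py line numbers in the messages). The pattern that produces it, as a sketch rather than nova's actual code:

    from oslo_concurrency import lockutils

    # Serializes resource-tracker critical sections; every call emits the
    # acquire/acquired/released DEBUG lines seen above.
    @lockutils.synchronized("compute_resources", fair=True)
    def clean_compute_node_cache():
        pass  # body elided; the lock is held for the duration of the call

    clean_compute_node_cache()
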
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.294 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.338 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.388 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.449 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.450 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.507 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.508 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.564 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.565 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.653 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.660 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.731 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.732 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.796 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.805 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.860 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.861 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:12 compute-0 nova_compute[185173]: 2026-01-23 11:54:12.938 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
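
Each "Running cmd"/"returned: 0" pair above is nova's resource audit probing an instance disk with qemu-img info, wrapped in oslo_concurrency.prlimit so the child is capped at 1 GiB of address space and 30 CPU seconds; --force-share avoids locking an image the running guest holds open. Replaying one invocation and parsing its JSON, with the path copied from the log (needs read access to the image):

    import json
    import subprocess

    path = "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk"
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", path, "--force-share", "--output=json"]
    info = json.loads(subprocess.check_output(cmd, text=True))
    print(info["format"], info["virtual-size"], info.get("actual-size"))
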
Jan 23 11:54:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:13.148 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.266 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.268 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4950MB free_disk=72.40254974365234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.366 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.367 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.367 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.368 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.499 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.521 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
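
For reference, the capacity placement derives from that inventory is (total - reserved) * allocation_ratio per resource class. Checking the figures above:

    # Inventory data copied from the report.py line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

The 4.0 VCPU allocation_ratio is why the "Final resource view" line can report 8 physical vCPUs while placement schedules up to 32.
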
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.523 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:54:13 compute-0 nova_compute[185173]: 2026-01-23 11:54:13.523 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:14 compute-0 nova_compute[185173]: 2026-01-23 11:54:14.501 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:14 compute-0 nova_compute[185173]: 2026-01-23 11:54:14.502 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:14 compute-0 nova_compute[185173]: 2026-01-23 11:54:14.531 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:14 compute-0 nova_compute[185173]: 2026-01-23 11:54:14.532 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:15 compute-0 nova_compute[185173]: 2026-01-23 11:54:15.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:15 compute-0 nova_compute[185173]: 2026-01-23 11:54:15.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:54:16 compute-0 nova_compute[185173]: 2026-01-23 11:54:16.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:16 compute-0 nova_compute[185173]: 2026-01-23 11:54:16.237 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:54:16 compute-0 nova_compute[185173]: 2026-01-23 11:54:16.607 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:16 compute-0 podman[241023]: 2026-01-23 11:54:16.729918178 +0000 UTC m=+0.067181033 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 23 11:54:16 compute-0 nova_compute[185173]: 2026-01-23 11:54:16.854 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:54:16 compute-0 nova_compute[185173]: 2026-01-23 11:54:16.854 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:54:16 compute-0 nova_compute[185173]: 2026-01-23 11:54:16.855 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.340 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.442 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.443 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.464 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.546 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.547 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.554 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.555 185177 INFO nova.compute.claims [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Claim successful on node compute-0.ctlplane.example.com
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.714 185177 DEBUG nova.compute.provider_tree [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.732 185177 DEBUG nova.scheduler.client.report [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.757 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.758 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.806 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.807 185177 DEBUG nova.network.neutron [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.824 185177 INFO nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 11:54:17 compute-0 nova_compute[185173]: 2026-01-23 11:54:17.860 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.000 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.002 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.002 185177 INFO nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Creating image(s)
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.003 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.004 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.005 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
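This acquire/acquired/released triple around disk.info is oslo.concurrency's standard lock protocol: both the wait time and the hold time are logged, so lock contention is visible straight from the journal. A small sketch of the same decorator-based usage (the lock name and paths here are illustrative, not Nova's):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('disk.info-demo', external=True,
                            lock_path='/tmp/lock-demo')
    def write_disk_info(path, contents):
        # external=True uses a file-based lock, serializing writers
        # across processes on the host, not just threads in one service
        with open(path, 'w') as f:
            f.write(contents)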
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.022 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.043 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.059 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.059 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
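The network_info blob logged a few lines up is a list of VIF dicts nested four levels deep (vif → network → subnets → ips → floating_ips). A short sketch of walking that structure, using an abbreviated copy of the logged data:

    network_info = [{
        "id": "05dcc60f-5c09-47f3-9834-3594bf71b68e",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.62", "type": "fixed",
            "floating_ips": [{"address": "192.168.122.182"}]}]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)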
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.079 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
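Note the wrapper on the qemu-img probe above: oslo_concurrency.prlimit re-execs the command with --as=1073741824 (a 1 GiB address-space cap) and --cpu=30 (30 CPU seconds) so a malformed or hostile image cannot make the prober consume unbounded resources. A rough standard-library equivalent, assuming only that qemu-img is on PATH:

    import json
    import resource
    import subprocess

    def probe_image(path, mem=1 << 30, cpu=30):
        def _limits():
            # applied in the child between fork and exec
            resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu, cpu))
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            env={'LC_ALL': 'C', 'LANG': 'C'},
            capture_output=True, check=True, preexec_fn=_limits)
        return json.loads(out.stdout)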
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.080 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "80c014b261205a8ef2db68f438805c389e810b13" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.081 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.096 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.158 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.159 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.204 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.205 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
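The create call above is the heart of Nova's qcow2 image backend: the instance disk is a copy-on-write overlay whose backing file is the cached base image under _base/, so the guest only consumes space for blocks it actually writes. A sketch of the same invocation (paths abbreviated):

    import subprocess

    def create_overlay(base, overlay, size_bytes):
        # backing_fmt must state the base image's real format (raw here);
        # letting qemu guess the backing format is unsafe
        subprocess.run(
            ['qemu-img', 'create', '-f', 'qcow2',
             '-o', f'backing_file={base},backing_fmt=raw',
             overlay, str(size_bytes)],
            check=True)

    create_overlay('/var/lib/nova/instances/_base/80c014b2...',
                   '/var/lib/nova/instances/<uuid>/disk', 1073741824)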
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.206 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.264 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.266 185177 DEBUG nova.virt.disk.api [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Checking if we can resize image /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.267 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.324 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.326 185177 DEBUG nova.virt.disk.api [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Cannot resize image /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
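The "Cannot resize image ... to a smaller size" line above is the expected outcome of can_resize_image when the flavor's root size does not exceed the disk's current virtual size: qcow2 disks are only ever grown, never shrunk, so Nova simply skips the resize. A sketch of that check (the function name is illustrative):

    import json
    import subprocess

    def maybe_grow(path, requested_bytes):
        info = json.loads(subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            capture_output=True, check=True).stdout)
        if requested_bytes <= info['virtual-size']:
            return False  # nothing to do; shrinking is not supported
        subprocess.run(['qemu-img', 'resize', path, str(requested_bytes)],
                       check=True)
        return True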
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.326 185177 DEBUG nova.objects.instance [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'migration_context' on Instance uuid ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.343 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.343 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.344 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.361 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.421 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.422 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.423 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.438 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.496 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.498 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.911 185177 DEBUG nova.network.neutron [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Successfully updated port: b9b63bb2-5fc6-48b1-8945-ac43ce6e954e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.934 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.935 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquired lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:54:18 compute-0 nova_compute[185173]: 2026-01-23 11:54:18.935 185177 DEBUG nova.network.neutron [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.029 185177 DEBUG nova.compute.manager [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-changed-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.030 185177 DEBUG nova.compute.manager [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Refreshing instance network info cache due to event network-changed-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.030 185177 DEBUG oslo_concurrency.lockutils [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.153 185177 DEBUG nova.network.neutron [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.565 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 1073741824" returned: 0 in 1.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.566 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 1.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.567 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.625 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.626 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.627 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Ensure instance console log exists: /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.627 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.628 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:19 compute-0 nova_compute[185173]: 2026-01-23 11:54:19.628 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.292 185177 DEBUG nova.network.neutron [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updating instance_info_cache with network_info: [{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.317 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Releasing lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.318 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Instance network_info: |[{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.319 185177 DEBUG oslo_concurrency.lockutils [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.319 185177 DEBUG nova.network.neutron [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Refreshing network info cache for port b9b63bb2-5fc6-48b1-8945-ac43ce6e954e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.322 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Start _get_guest_xml network_info=[{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
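The disk_info mapping in the line above is what later materializes as the <disk> elements of the domain XML dumped below: each entry's dev/bus pair becomes a <target>, and disk.config rides a SATA bus as a cdrom because it is the config-drive ISO. A toy rendering of that correspondence (Nova's real builder goes through libvirt config objects, not string formatting):

    mapping = {
        'disk':        {'bus': 'virtio', 'dev': 'vda', 'type': 'disk'},
        'disk.eph0':   {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'},
        'disk.config': {'bus': 'sata',   'dev': 'sda', 'type': 'cdrom'},
    }
    for name, m in mapping.items():
        print(f'<disk device="{m["type"]}"><target dev="{m["dev"]}" '
              f'bus="{m["bus"]}"/></disk>  <!-- {name} -->')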
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.330 185177 WARNING nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.343 185177 DEBUG nova.virt.libvirt.host [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.344 185177 DEBUG nova.virt.libvirt.host [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.348 185177 DEBUG nova.virt.libvirt.host [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.349 185177 DEBUG nova.virt.libvirt.host [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
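The two probes above are Nova deciding whether CPU tuning is possible on this host: the cgroup-v1 hierarchy exposes no cpu controller, but the unified v2 hierarchy does. The v2 check reduces to reading the controller list at the hierarchy root; a minimal equivalent:

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # on a unified (v2) hierarchy the available controllers are
        # listed, space-separated, in cgroup.controllers at the root
        try:
            with open(f'{root}/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a v2 mount; a v1 fallback would go here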
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.350 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.353 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T11:45:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='f2c5c5dd-a580-4885-a3ab-a766eac401c8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.354 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.354 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.355 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.355 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.356 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.356 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.357 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.357 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.358 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.358 185177 DEBUG nova.virt.hardware [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
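The topology walk above starts from no constraints at all (flavor and image limits and preferences are 0:0:0, so the maxima default to 65536) and, for a single vCPU, collapses to the lone 1:1:1 layout. A simplified sketch of the enumeration step; Nova's real _get_possible_cpu_topologies adds ordering and preference handling on top of this basic factorization:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536,
                            max_cores=65536, max_threads=65536):
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]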
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.365 185177 DEBUG nova.virt.libvirt.vif [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv',id=3,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-l5s1i5hh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:54:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTk0MTc1ODMzMjU0NTYxMjY3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTk0MTc1ODMzMjU0NTYxMjY3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.366 185177 DEBUG nova.network.os_vif_util [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.367 185177 DEBUG nova.network.os_vif_util [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
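These two lines are the hand-off from Neutron's port dict to the typed os-vif object that the plug/unplug plugin consumes; the conversion is mostly field renaming (devname → vif_name, details.port_filter → has_traffic_filtering). A schematic of that mapping, with a plain dataclass standing in for os_vif.objects.vif.VIFOpenVSwitch:

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchSketch:
        id: str
        address: str
        vif_name: str
        bridge_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif(vif):
        details = vif['details']
        return VIFOpenVSwitchSketch(
            id=vif['id'],
            address=vif['address'],
            vif_name=vif['devname'],
            bridge_name=details['bridge_name'],
            has_traffic_filtering=details['port_filter'],
            active=vif['active'])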
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.368 185177 DEBUG nova.objects.instance [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.387 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] End _get_guest_xml xml=<domain type="kvm">
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <uuid>ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e</uuid>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <name>instance-00000003</name>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <memory>524288</memory>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <metadata>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:name>vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv</nova:name>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 11:54:20</nova:creationTime>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:flavor name="m1.small">
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:memory>512</nova:memory>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:ephemeral>1</nova:ephemeral>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:user uuid="d9858533c2284846a8f0f19a1fb45045">admin</nova:user>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:project uuid="bd16a0de2f5e4a8480a855ef0e1a3f14">admin</nova:project>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="c5833e41-b4db-454e-8f49-014aa18c7dc5"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <nova:ports>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         <nova:port uuid="b9b63bb2-5fc6-48b1-8945-ac43ce6e954e">
Jan 23 11:54:20 compute-0 nova_compute[185173]:           <nova:ip type="fixed" address="192.168.0.99" ipVersion="4"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:         </nova:port>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       </nova:ports>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </metadata>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <system>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <entry name="serial">ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e</entry>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <entry name="uuid">ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e</entry>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </system>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <os>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </os>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <features>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <apic/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </features>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </clock>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <target dev="vdb" bus="virtio"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.config"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <interface type="ethernet">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <mac address="fa:16:3e:fa:bc:bc"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <mtu size="1442"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <target dev="tapb9b63bb2-5f"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/console.log" append="off"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </serial>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <video>
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </video>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 11:54:20 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 11:54:20 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 11:54:20 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:54:20 compute-0 nova_compute[185173]: </domain>
Jan 23 11:54:20 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
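The block above is the complete domain XML that nova's libvirt driver generated for instance-00000003: q35 machine type, two virtio qcow2 disks, a SATA config-drive CD-ROM, and an OVS-backed virtio NIC with MTU 1442. The same definition can be pulled back from the running guest for comparison; a minimal sketch using the libvirt Python bindings (the connection URI and flag value are assumptions, not taken from this log):

    import libvirt  # provided by the libvirt-python package

    # Read-only connection to the local system hypervisor.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        dom = conn.lookupByName("instance-00000003")
        # XMLDesc(0) returns the live definition, which may carry runtime
        # details (PCI addresses, device aliases) that the generated XML
        # above does not have yet.
        print(dom.XMLDesc(0))
    finally:
        conn.close()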
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.388 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Preparing to wait for external event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.389 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.389 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.389 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
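These lines show nova's external-event handshake: it registers an expectation for network-vif-plugged *before* plugging the VIF, and only blocks on it after the guest is created (the wait completes a few seconds below with "Instance event wait completed in 0 seconds"). A minimal sketch of the same prepare-then-wait pattern with a plain threading.Event; the names here are illustrative, not nova's actual API:

    import threading

    events = {}
    events_lock = threading.Lock()

    def prepare_for_event(name):
        # Register interest before triggering the external action,
        # so an early notification cannot be lost.
        with events_lock:
            return events.setdefault(name, threading.Event())

    def deliver_event(name):
        # Invoked when the external notification (here, from Neutron)
        # arrives over the API.
        with events_lock:
            events.setdefault(name, threading.Event()).set()

    waiter = prepare_for_event("network-vif-plugged-b9b63bb2")
    # ... plug the VIF, define and start the guest ...
    deliver_event("network-vif-plugged-b9b63bb2")  # normally done by the event handler
    if not waiter.wait(timeout=300):
        raise TimeoutError("network-vif-plugged never arrived")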
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.390 185177 DEBUG nova.virt.libvirt.vif [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv',id=3,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-l5s1i5hh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:54:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTk0MTc1ODMzMjU0NTYxMjY3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 23 11:54:20 compute-0 nova_compute[185173]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTk0MTc1ODMzMjU0NTYxMjY3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.391 185177 DEBUG nova.network.os_vif_util [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.392 185177 DEBUG nova.network.os_vif_util [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.392 185177 DEBUG os_vif [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.393 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.394 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.394 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.398 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.398 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b63bb2-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.399 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9b63bb2-5f, col_values=(('external_ids', {'iface-id': 'b9b63bb2-5fc6-48b1-8945-ac43ce6e954e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:bc:bc', 'vm-uuid': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.401 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:20 compute-0 NetworkManager[56133]: <info>  [1769169260.4026] manager: (tapb9b63bb2-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.405 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.411 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.412 185177 INFO os_vif [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f')
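The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp operations; the external_ids written on the Interface row (iface-id in particular) are what lets ovn-controller match the OVS port to the logical port it claims below. Functionally the plug amounts to one idempotent ovs-vsctl invocation; a sketch driving that CLI from Python, with the values taken from this log (assumes ovs-vsctl is on PATH and the tap device exists):

    import subprocess

    port = "tapb9b63bb2-5f"
    iface_id = "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e"
    mac = "fa:16:3e:fa:bc:bc"
    vm_uuid = "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e"

    # --may-exist makes add-port a no-op when the port is already present,
    # matching may_exist=True in the logged commands.
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )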
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.483 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.483 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.483 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.483 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No VIF found with MAC fa:16:3e:fa:bc:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.484 185177 INFO nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Using config drive
Jan 23 11:54:20 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 11:54:20.365 185177 DEBUG nova.virt.libvirt.vif [None req-67cf1cd3-2a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 11:54:20 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 11:54:20.390 185177 DEBUG nova.virt.libvirt.vif [None req-67cf1cd3-2a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
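These two rsyslogd warnings account for the prefix-less continuation lines earlier in this section: the nova debug records that dump the whole Instance object, including the base64-encoded user_data, exceed rsyslog's configured 8096-byte limit, so the syslog copies are cut at that boundary while the full payload still appears above split across unprefixed lines. If complete records are needed in syslog, the limit can be raised; a minimal sketch for /etc/rsyslog.conf, with 64k as an assumed value (size it to the largest expected record):

    # Legacy directive; must be set before the first input is defined.
    $MaxMessageSize 64k
    # RainerScript equivalent: global(maxMessageSize="64k")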
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.864 185177 INFO nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Creating config drive at /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.config
Jan 23 11:54:20 compute-0 nova_compute[185173]: 2026-01-23 11:54:20.871 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpssxp795q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.001 185177 DEBUG oslo_concurrency.processutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpssxp795q" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
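The config drive is an ordinary ISO 9660 image (Joliet plus Rock Ridge, volume label config-2) built from a temporary directory of rendered metadata files; cloud-init inside the guest locates the disk by that config-2 label. A sketch reproducing the logged invocation from Python, much as oslo's processutils does; the paths are the ones from this log, and the staging directory contents are whatever nova rendered into /tmp/tmpssxp795q:

    import subprocess

    out = "/var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.config"
    staging = "/tmp/tmpssxp795q"  # nova's temporary metadata tree

    # The publisher string contains spaces; passing argv as a list keeps it
    # a single argument, which the flattened log line above obscures.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", out,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True,
    )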
Jan 23 11:54:21 compute-0 NetworkManager[56133]: <info>  [1769169261.0751] manager: (tapb9b63bb2-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 23 11:54:21 compute-0 kernel: tapb9b63bb2-5f: entered promiscuous mode
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.088 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:21 compute-0 ovn_controller[97581]: 2026-01-23T11:54:21Z|00040|binding|INFO|Claiming lport b9b63bb2-5fc6-48b1-8945-ac43ce6e954e for this chassis.
Jan 23 11:54:21 compute-0 ovn_controller[97581]: 2026-01-23T11:54:21Z|00041|binding|INFO|b9b63bb2-5fc6-48b1-8945-ac43ce6e954e: Claiming fa:16:3e:fa:bc:bc 192.168.0.99
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.097 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.104 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:bc:bc 192.168.0.99'], port_security=['fa:16:3e:fa:bc:bc 192.168.0.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wvvtbi4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-port-bzb33egd64ru', 'neutron:cidrs': '192.168.0.99/24', 'neutron:device_id': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wvvtbi4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-port-bzb33egd64ru', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.105 106832 INFO neutron.agent.ovn.metadata.agent [-] Port b9b63bb2-5fc6-48b1-8945-ac43ce6e954e in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 bound to our chassis
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.106 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 11:54:21 compute-0 ovn_controller[97581]: 2026-01-23T11:54:21Z|00042|binding|INFO|Setting lport b9b63bb2-5fc6-48b1-8945-ac43ce6e954e ovn-installed in OVS
Jan 23 11:54:21 compute-0 ovn_controller[97581]: 2026-01-23T11:54:21Z|00043|binding|INFO|Setting lport b9b63bb2-5fc6-48b1-8945-ac43ce6e954e up in Southbound
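At this point ovn-controller has claimed the logical port for this chassis, stamped ovn-installed on the OVS interface, and flipped the port up in the OVN Southbound database; the network-vif-plugged event nova receives a few lines below is driven off that transition. A sketch for confirming the binding from the compute node (assumes ovn-sbctl can reach the local Southbound connection; expect chassis to reference this host and up to be true):

    import json
    import subprocess

    lport = "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e"
    # 'find' returns the Port_Binding row whose logical_port matches.
    row = subprocess.run(
        ["ovn-sbctl", "--format=json",
         "find", "Port_Binding", f"logical_port={lport}"],
        check=True, capture_output=True, text=True,
    )
    print(json.loads(row.stdout))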
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.116 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:21 compute-0 systemd-machined[156550]: New machine qemu-3-instance-00000003.
Jan 23 11:54:21 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.156 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[9717415e-2f50-4229-a578-d28a23b4302d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:54:21 compute-0 systemd-udevd[241095]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:54:21 compute-0 NetworkManager[56133]: <info>  [1769169261.1770] device (tapb9b63bb2-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 11:54:21 compute-0 NetworkManager[56133]: <info>  [1769169261.1826] device (tapb9b63bb2-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.197 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[d8aa3326-6425-40c7-a742-4f18def1295c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.200 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[fafb1cd1-a0cd-486f-8397-2f7b8487cb21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.228 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb66288-8ada-4f72-8dd0-91b507b2b357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.251 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[704cb386-9fea-48e6-a5d0-bc91c6d638bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 37321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241108, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.280 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa1bc93-a998-4b9e-98f5-8ee232999e53]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374787, 'tstamp': 374787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241109, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374789, 'tstamp': 374789}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241109, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.282 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.284 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.285 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.286 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.286 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.287 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:54:21 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:21.288 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
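"Provisioning metadata for network ..." means the OVN metadata agent has set up a per-network namespace, ovnmeta-9d2c33ef-..., whose veth end tap9d2c33ef-01 carries 169.254.169.254/32 and 192.168.0.2/24 (both visible in the RTM_NEWADDR replies above, where the host-side peer tap9d2c33ef-00 is the port wired into br-int), so the guest can reach the metadata service without a router. A sketch for inspecting it from the host; the namespace name is taken from this log and the command needs root:

    import subprocess

    ns = "ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53"
    # Show the addresses inside the metadata namespace; expect
    # 169.254.169.254/32 and 192.168.0.2/24 on tap9d2c33ef-01.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "addr", "show"], check=True)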
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.310 185177 DEBUG nova.compute.manager [req-b9e81c3b-6047-4366-86d9-de2c470b40cf req-d5d8e6ce-9fd9-4c16-b021-3664cf95135d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.310 185177 DEBUG oslo_concurrency.lockutils [req-b9e81c3b-6047-4366-86d9-de2c470b40cf req-d5d8e6ce-9fd9-4c16-b021-3664cf95135d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.311 185177 DEBUG oslo_concurrency.lockutils [req-b9e81c3b-6047-4366-86d9-de2c470b40cf req-d5d8e6ce-9fd9-4c16-b021-3664cf95135d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.311 185177 DEBUG oslo_concurrency.lockutils [req-b9e81c3b-6047-4366-86d9-de2c470b40cf req-d5d8e6ce-9fd9-4c16-b021-3664cf95135d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.311 185177 DEBUG nova.compute.manager [req-b9e81c3b-6047-4366-86d9-de2c470b40cf req-d5d8e6ce-9fd9-4c16-b021-3664cf95135d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Processing event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.369 185177 DEBUG nova.network.neutron [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updated VIF entry in instance network info cache for port b9b63bb2-5fc6-48b1-8945-ac43ce6e954e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.369 185177 DEBUG nova.network.neutron [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updating instance_info_cache with network_info: [{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.385 185177 DEBUG oslo_concurrency.lockutils [req-d73e2c02-5c05-4a1d-add3-da480e6e9b32 req-f43b3248-1534-4231-a8f0-0bd96afd9deb e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.565 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169261.5649533, ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.566 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] VM Started (Lifecycle Event)
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.568 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.577 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.587 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.591 185177 INFO nova.virt.libvirt.driver [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Instance spawned successfully.
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.592 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.595 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.610 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.612 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.613 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169261.56516, ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.613 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] VM Paused (Lifecycle Event)
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.619 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.619 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.620 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.620 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.620 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.621 185177 DEBUG nova.virt.libvirt.driver [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
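The "Found default for ..." lines record the bus/model defaults the libvirt driver chose for image properties this image left unset. A hypothetical sketch of that bookkeeping, using the exact defaults logged above:

```python
# Hypothetical sketch of recording chosen defaults for undefined image
# properties, mirroring the "Found default for ..." lines above.
DRIVER_DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_undefined_instance_details(image_properties):
    """Return the defaults applied for properties the image left unset."""
    applied = {}
    for prop, default in DRIVER_DEFAULTS.items():
        if prop not in image_properties:
            print(f"Found default for {prop} of {default}")
            applied[prop] = default
    return applied

register_undefined_instance_details({})  # image with no hw_* overrides
```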
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.630 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.637 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169261.5755358, ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.637 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] VM Resumed (Lifecycle Event)
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.663 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.667 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.680 185177 INFO nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Took 3.68 seconds to spawn the instance on the hypervisor.
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.681 185177 DEBUG nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.690 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.734 185177 INFO nova.compute.manager [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Took 4.21 seconds to build instance.
Jan 23 11:54:21 compute-0 nova_compute[185173]: 2026-01-23 11:54:21.747 185177 DEBUG oslo_concurrency.lockutils [None req-67cf1cd3-2a63-45b2-a016-021a18ad1fa2 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
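The whole build ran under a per-instance lock, and the "waited"/"held" figures in the release line come from timing acquire and release. A self-contained sketch of that bookkeeping (plain threading, not oslo.concurrency itself):

```python
# Self-contained sketch of the 'acquired :: waited Ns' /
# 'released :: held Ns' bookkeeping pattern seen above.
import threading
import time
from contextlib import contextmanager

_locks = {}

@contextmanager
def instance_lock(name):
    lock = _locks.setdefault(name, threading.Lock())
    start = time.monotonic()
    with lock:
        waited = time.monotonic() - start
        print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
        held_start = time.monotonic()
        try:
            yield
        finally:
            held = time.monotonic() - held_start
            print(f'Lock "{name}" released :: held {held:.3f}s')

with instance_lock("ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e"):
    time.sleep(0.01)  # stand-in for the build-and-run work
```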
Jan 23 11:54:23 compute-0 nova_compute[185173]: 2026-01-23 11:54:23.387 185177 DEBUG nova.compute.manager [req-234b8fca-c7fd-461f-835a-de11ccdc3208 req-01ea85b0-920b-4080-b353-c35b8a115de3 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:54:23 compute-0 nova_compute[185173]: 2026-01-23 11:54:23.387 185177 DEBUG oslo_concurrency.lockutils [req-234b8fca-c7fd-461f-835a-de11ccdc3208 req-01ea85b0-920b-4080-b353-c35b8a115de3 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:23 compute-0 nova_compute[185173]: 2026-01-23 11:54:23.387 185177 DEBUG oslo_concurrency.lockutils [req-234b8fca-c7fd-461f-835a-de11ccdc3208 req-01ea85b0-920b-4080-b353-c35b8a115de3 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:23 compute-0 nova_compute[185173]: 2026-01-23 11:54:23.388 185177 DEBUG oslo_concurrency.lockutils [req-234b8fca-c7fd-461f-835a-de11ccdc3208 req-01ea85b0-920b-4080-b353-c35b8a115de3 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:23 compute-0 nova_compute[185173]: 2026-01-23 11:54:23.388 185177 DEBUG nova.compute.manager [req-234b8fca-c7fd-461f-835a-de11ccdc3208 req-01ea85b0-920b-4080-b353-c35b8a115de3 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] No waiting events found dispatching network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 11:54:23 compute-0 nova_compute[185173]: 2026-01-23 11:54:23.388 185177 WARNING nova.compute.manager [req-234b8fca-c7fd-461f-835a-de11ccdc3208 req-01ea85b0-920b-4080-b353-c35b8a115de3 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received unexpected event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e for instance with vm_state active and task_state None.
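The network-vif-plugged event arrived after the build had already finished (vm_state active, task_state None), so no waiter was registered for it and Nova logs it as unexpected. An illustrative pop-or-warn registry showing why both messages appear together:

```python
# Illustrative event registry: a late external event finds no waiter,
# producing "No waiting events found" plus a WARNING, as logged above.
import threading

class InstanceEvents:
    def __init__(self):
        self._waiters = {}  # (instance_id, event_name) -> threading.Event

    def prepare(self, instance_id, event_name):
        ev = threading.Event()
        self._waiters[(instance_id, event_name)] = ev
        return ev  # a build thread would block on ev.wait()

    def pop_instance_event(self, instance_id, event_name):
        ev = self._waiters.pop((instance_id, event_name), None)
        if ev is None:
            print(f"No waiting events found dispatching {event_name}")
            print(f"WARNING: Received unexpected event {event_name} "
                  f"for instance {instance_id}")
            return
        ev.set()  # wake the thread blocked in the build path

events = InstanceEvents()
events.pop_instance_event("ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e",
                          "network-vif-plugged-b9b63bb2")
```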
Jan 23 11:54:23 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 11:54:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 11:54:25 compute-0 nova_compute[185173]: 2026-01-23 11:54:25.404 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:25 compute-0 podman[241137]: 2026-01-23 11:54:25.787435142 +0000 UTC m=+0.105187053 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 23 11:54:25 compute-0 podman[241136]: 2026-01-23 11:54:25.805880168 +0000 UTC m=+0.123264089 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:54:26 compute-0 nova_compute[185173]: 2026-01-23 11:54:26.610 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:27 compute-0 podman[241174]: 2026-01-23 11:54:27.772378174 +0000 UTC m=+0.102609968 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 11:54:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:29.098 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:54:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:29.099 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:54:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:54:29.099 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:54:29 compute-0 podman[201022]: time="2026-01-23T11:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:54:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:54:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
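The two GET lines are podman_exporter polling the libpod REST API over the podman socket mounted into its container. A minimal stdlib client for the same calls, assuming the socket path /run/podman/podman.sock shown in the exporter's config above (field names per the libpod API; verify against your podman version):

```python
# Minimal stdlib client for the libpod calls logged above.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks HTTP over an AF_UNIX socket."""

    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = conn.getresponse()
for ctr in json.loads(resp.read()):
    print(ctr.get("Names"), ctr.get("State"))
```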
Jan 23 11:54:29 compute-0 podman[241192]: 2026-01-23 11:54:29.7780495 +0000 UTC m=+0.109802717 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 11:54:30 compute-0 nova_compute[185173]: 2026-01-23 11:54:30.406 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:31 compute-0 openstack_network_exporter[204160]: ERROR   11:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:54:31 compute-0 openstack_network_exporter[204160]: ERROR   11:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
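The exporter's dpif-netdev/* appctl calls only succeed when a userspace (netdev) datapath exists; on a host using the kernel datapath they fail exactly as logged. A hedged reproduction of the probe (requires ovs-appctl on the host and access to the OVS control socket):

```python
# Hedged reproduction of the failing calls: dpif-netdev/* applies only
# to the userspace datapath, so on a kernel-datapath host they error.
import subprocess

for cmd in ("dpif-netdev/pmd-rxq-show", "dpif-netdev/pmd-perf-show"):
    proc = subprocess.run(
        ["ovs-appctl", cmd],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        # Expect: "please specify an existing datapath"
        print(f"{cmd}: {proc.stderr.strip() or proc.stdout.strip()}")
```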
Jan 23 11:54:31 compute-0 nova_compute[185173]: 2026-01-23 11:54:31.613 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:35 compute-0 nova_compute[185173]: 2026-01-23 11:54:35.411 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:36 compute-0 nova_compute[185173]: 2026-01-23 11:54:36.614 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:36 compute-0 podman[241218]: 2026-01-23 11:54:36.776847377 +0000 UTC m=+0.104769582 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 11:54:38 compute-0 podman[241240]: 2026-01-23 11:54:38.755256167 +0000 UTC m=+0.091298669 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., config_id=kepler, release-0.7.12=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9)
Jan 23 11:54:40 compute-0 nova_compute[185173]: 2026-01-23 11:54:40.416 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:41 compute-0 nova_compute[185173]: 2026-01-23 11:54:41.616 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:42 compute-0 podman[241260]: 2026-01-23 11:54:42.725519721 +0000 UTC m=+0.059037941 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 11:54:45 compute-0 nova_compute[185173]: 2026-01-23 11:54:45.422 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:46 compute-0 nova_compute[185173]: 2026-01-23 11:54:46.618 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:47 compute-0 podman[241284]: 2026-01-23 11:54:47.728516748 +0000 UTC m=+0.064219159 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc.)
Jan 23 11:54:50 compute-0 nova_compute[185173]: 2026-01-23 11:54:50.426 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:51 compute-0 ovn_controller[97581]: 2026-01-23T11:54:51Z|00044|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 23 11:54:51 compute-0 nova_compute[185173]: 2026-01-23 11:54:51.620 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:54 compute-0 ovn_controller[97581]: 2026-01-23T11:54:54Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:bc:bc 192.168.0.99
Jan 23 11:54:54 compute-0 ovn_controller[97581]: 2026-01-23T11:54:54Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:bc:bc 192.168.0.99
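ovn-controller answers DHCP natively from its pinctrl thread; the OFFER/ACK pair hands 192.168.0.99 to fa:16:3e:fa:bc:bc, matching the fixed IP in the Nova response further down. A quick parser for these lines (format assumed from the two samples above):

```python
# Quick parser for the OVN pinctrl DHCP lines above; adjust the regex
# if other log variants carry extra fields.
import re

PINCTRL_DHCP = re.compile(
    r"\|pinctrl\(\w+\)\|INFO\|(DHCPOFFER|DHCPACK)\s+"
    r"(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2})\s+(?P<ip>\S+)"
)

line = ("2026-01-23T11:54:54Z|00009|pinctrl(ovn_pinctrl0)|INFO|"
        "DHCPACK fa:16:3e:fa:bc:bc 192.168.0.99")
m = PINCTRL_DHCP.search(line)
if m:
    print(m.group(1), m.group("mac"), m.group("ip"))
```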
Jan 23 11:54:55 compute-0 nova_compute[185173]: 2026-01-23 11:54:55.430 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:56 compute-0 nova_compute[185173]: 2026-01-23 11:54:56.623 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:54:56 compute-0 podman[241313]: 2026-01-23 11:54:56.763092405 +0000 UTC m=+0.097017383 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:54:56 compute-0 podman[241314]: 2026-01-23 11:54:56.77868744 +0000 UTC m=+0.105639410 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Jan 23 11:54:58 compute-0 podman[241352]: 2026-01-23 11:54:58.752570988 +0000 UTC m=+0.081177152 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:54:59 compute-0 podman[201022]: time="2026-01-23T11:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:54:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:54:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 23 11:55:00 compute-0 nova_compute[185173]: 2026-01-23 11:55:00.433 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:00 compute-0 podman[241371]: 2026-01-23 11:55:00.753946128 +0000 UTC m=+0.087954155 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 11:55:01 compute-0 openstack_network_exporter[204160]: ERROR   11:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:55:01 compute-0 openstack_network_exporter[204160]: ERROR   11:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.453 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.453 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.453 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.461 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
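Each pollster above is handed to a ThreadPoolExecutor with a single worker ("[1] threads"), which is why the manager warned at the start of the cycle that processing may take longer than expected. A sketch of that dispatch pattern, with illustrative names rather than ceilometer's internals:

```python
# Sketch of the dispatch in the lines above: more pollsters than worker
# threads, so tasks queue on a single-worker ThreadPoolExecutor.
from concurrent.futures import ThreadPoolExecutor

def make_pollster(name):
    def poll():
        return f"{name}: polled"
    return poll

pollsters = [make_pollster(f"pollster-{i}") for i in range(8)]

with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads"
    if len(pollsters) > 1:
        print("The number of pollsters is bigger than the number of "
              "worker threads; expect the cycle to take longer.")
    futures = [executor.submit(p) for p in pollsters]
    for future in futures:
        print(future.result())
```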
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.465 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'name': 'vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.467 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 23 11:55:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:01.468 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad70b57d9194f6532b182b578b16289681d355eb6a1afd27a70859dd1387cbc9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 23 11:55:01 compute-0 nova_compute[185173]: 2026-01-23 11:55:01.625 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.165 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 23 Jan 2026 11:55:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-647ad3f5-edad-41a7-9438-69649809e56a x-openstack-request-id: req-647ad3f5-edad-41a7-9438-69649809e56a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.166 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e", "name": "vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv", "status": "ACTIVE", "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "user_id": "d9858533c2284846a8f0f19a1fb45045", "metadata": {"metering.server_group": "500baa09-1e39-474e-b275-8b2dffe3a65b"}, "hostId": "47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb", "image": {"id": "c5833e41-b4db-454e-8f49-014aa18c7dc5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c5833e41-b4db-454e-8f49-014aa18c7dc5"}]}, "flavor": {"id": "f2c5c5dd-a580-4885-a3ab-a766eac401c8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/f2c5c5dd-a580-4885-a3ab-a766eac401c8"}]}, "created": "2026-01-23T11:54:14Z", "updated": "2026-01-23T11:54:21Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.99", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:bc:bc"}, {"version": 4, "addr": "192.168.122.192", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:bc:bc"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-23T11:54:21.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.166 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e used request id req-647ad3f5-edad-41a7-9438-69649809e56a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.168 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e', 'name': 'vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
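[annotation] The exchange above is ordinary novaclient traffic: ceilometer's discovery step fetches the server record from the Nova API with microversion 2.1, and keystoneauth1 logs the SHA256 digest of the token (the {SHA256} prefix in the REQ line) rather than the credential itself. Below is a minimal sketch reproducing the same GET with plain requests, using the endpoint and headers shown in the log; the token value is a placeholder, not a real credential, and this is an illustration rather than how novaclient issues the call internally.

    import hashlib
    import requests

    # Values taken from the REQ/RESP lines above; the token is hypothetical.
    NOVA_ENDPOINT = "https://nova-internal.openstack.svc:8774/v2.1"
    SERVER_ID = "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e"
    token = "<redacted-keystone-token>"  # placeholder, never a real credential

    # keystoneauth1 logs "{SHA256}" + sha256(token) instead of the token;
    # computing the same digest lets you match a token to a logged request.
    digest = "{SHA256}" + hashlib.sha256(token.encode()).hexdigest()

    resp = requests.get(
        f"{NOVA_ENDPOINT}/servers/{SERVER_ID}",
        headers={
            "Accept": "application/json",
            "User-Agent": "python-novaclient",
            "X-Auth-Token": token,
            "X-OpenStack-Nova-API-Version": "2.1",
        },
        timeout=30,
    )
    server = resp.json()["server"]
    print(server["OS-EXT-SRV-ATTR:instance_name"], server["status"])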
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.169 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.170 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:55:02.170173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.177 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.183 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.189 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e / tapb9b63bb2-5f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.189 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
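[annotation] The *.delta meters report the change since the previous poll. The "No delta meter predecessor" line above means the inspector has no cached reading yet for tapb9b63bb2-5f (the instance launched about a minute earlier), so the sample is emitted as 0. A simplified sketch of that bookkeeping follows, assuming a per-(instance, device) cache; it is illustrative, not ceilometer's actual code.

    # Illustrative delta-meter bookkeeping; cache key/structure are assumptions.
    _prior_readings = {}  # (instance_id, device) -> last cumulative counter

    def delta_sample(instance_id, device, current_total):
        """Return the increase since the previous poll, or 0 on first sight."""
        key = (instance_id, device)
        previous = _prior_readings.get(key)
        _prior_readings[key] = current_total
        if previous is None:
            # First poll for this vNIC: no predecessor, report 0 (as logged).
            return 0
        # Counters can reset if the guest reboots; clamp at zero.
        return max(current_total - previous, 0)

    print(delta_sample("ee2f2821", "tapb9b63bb2-5f", 1486))  # -> 0 (first poll)
    print(delta_sample("ee2f2821", "tapb9b63bb2-5f", 1556))  # -> 70 next time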
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.191 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.192 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:55:02.192510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.218 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.219 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.219 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.241 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.242 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.242 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.263 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.264 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.265 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:55:02.334985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.404 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.405 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.405 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.464 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.465 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.465 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.541 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.542 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.542 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-23T11:55:02.543533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.543 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv>]
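[annotation] The ERROR above is ceilometer's permanent-failure path: the inspector reported it "does not provide data for OutgoingBytesRatePollster" (11:55:02.543), so the pollster raises PollsterPermanentError for the affected resource and the manager excludes it from future polls of this meter, hence "anymore!". A rough sketch of the pattern follows; apart from the PollsterPermanentError name (which appears in the log), the structure is an assumption about the internals, not ceilometer's code.

    # Sketch of permanent-error blacklisting; names other than
    # PollsterPermanentError are assumptions.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(str(resources))
            self.fail = resources  # resources that can never yield this meter

    blacklist = []

    def poll_once(pollster, resources):
        try:
            return pollster(resources)
        except PollsterPermanentError as err:
            # Skip these resources for this meter from now on, as the
            # "Prevent pollster ... anymore!" message indicates.
            blacklist.extend(err.fail)
            return []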
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.544 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.545 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.545 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.545 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.545 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 801641355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.545 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 8862519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:55:02.544875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.546 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.546 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 1823499699 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.546 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 8667328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.547 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.547 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.548 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:55:02.548075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.548 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.548 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.549 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.550 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:55:02.549890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.550 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.550 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.551 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.551 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.551 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.551 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.552 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.552 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.553 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.553 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.553 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:55:02.553002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.553 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.554 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.554 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:55:02.554662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.556 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.556 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:55:02.555963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.556 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.557 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 363540160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.557 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 61167194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.557 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 48392812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.557 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 374273377 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.558 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 71332104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.558 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 53834488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.559 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:55:02.559104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.559 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.560 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.560 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.560 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.560 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.560 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.561 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.561 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:55:02.562183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:55:02.563345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.563 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.564 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:55:02.564899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.591 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.613 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/memory.usage volume: 49.21484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.641 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/memory.usage volume: 49.69140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
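[annotation] memory.usage is reported in MB per instance; each of these m1.small guests (512 MB flavor) reports roughly 49 MB in use. A rough equivalent of what the libvirt inspector samples is sketched below, assuming the balloon driver populates the 'available' and 'unused' keys (both in KiB) in memoryStats(); the fallback to 'rss' is an assumption for illustration.

    import libvirt

    # Rough equivalent of the memory.usage samples above (values in MB),
    # assuming 'available' and 'unused' are exposed by the balloon driver.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000003")
    stats = dom.memoryStats()  # all values are in KiB
    if "available" in stats and "unused" in stats:
        usage_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        usage_mb = stats["rss"] / 1024.0  # assumed fallback: resident set size
    print(round(usage_mb, 8))  # e.g. 49.69140625 for this 512 MB guest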
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.642 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.642 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.643 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:55:02.643373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.644 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.645 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.646 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.646 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.646 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.646 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.647 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:55:02.646134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.648 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:55:02.648794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.649 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 37590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.649 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/cpu volume: 317890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.650 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/cpu volume: 32250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.650 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.651 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.651 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv>]
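The ERROR above is ceilometer's blacklisting path: when the libvirt inspector exposes no data for a meter, the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError carrying the affected resources, and the agent manager stops scheduling those resources for that pollster instead of retrying every interval. A minimal sketch of that contract follows; only PollsterBase, get_samples, and PollsterPermanentError come from the logged module, while ExamplePollster and _inspector_has_data are hypothetical names introduced here for illustration:

    from ceilometer.polling import plugin_base

    class ExamplePollster(plugin_base.PollsterBase):
        """Hypothetical pollster showing the permanent-failure contract."""

        @property
        def default_discovery(self):
            # Same discovery method named throughout the log above.
            return 'local_instances'

        def _inspector_has_data(self, resource):
            # Illustrative stand-in for "does the inspector support this
            # meter for this instance?" - always False here, mirroring the
            # rate meters above.
            return False

        def get_samples(self, manager, cache, resources):
            unsupported = [r for r in resources
                           if not self._inspector_has_data(r)]
            if unsupported:
                # The agent manager catches this and drops the listed
                # resources from this pollster's future runs, logging the
                # "Prevent pollster ... anymore!" ERROR seen above.
                raise plugin_base.PollsterPermanentError(unsupported)
            return []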
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-23T11:55:02.651516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.653 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:55:02.653808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.654 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.654 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.655 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.656 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:55:02.656623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.657 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.658 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.658 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.659 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:55:02.659233) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.660 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.660 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.661 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.661 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.661 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.661 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.662 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.662 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.664 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:55:02.664157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.664 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.665 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets volume: 45 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.665 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.666 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.666 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.667 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.667 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.668 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.668 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:55:02.667224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:55:02.670550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.670 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.671 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.671 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.672 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.672 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.672 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.673 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.673 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.673 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.675 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.675 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.675 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.676 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes volume: 5004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.676 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes volume: 1666 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:55:02.675662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.677 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.678 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:55:02.678489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.680 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.680 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.681 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.681 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.681 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.681 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:55:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:55:02.684 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
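The run of "Finished processing pollster" lines closes out one polling task: each meter above went through the same sequence of discovery, coordination check, heartbeat, and per-instance sampling. A standalone sketch of that cycle, keyed to the logged source locations (no ceilometer imports; every name below is illustrative, not the real manager.py code):

    def run_one_pollster(pollster, discover, heartbeat, publish):
        # manager.py:294 - resolve the local instances for this pollster
        resources = discover('local_instances')
        # manager.py:333/355 - no source configures a coordination group in
        # this deployment, so the hashring list stays [None] and the agent
        # simply polls everything it discovered
        heartbeat(pollster.name)                        # manager.py:636 -> :502
        for sample in pollster.get_samples(resources):  # pollsters/__init__.py:108
            publish(sample)                             # the per-instance "volume:" lines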
Jan 23 11:55:05 compute-0 nova_compute[185173]: 2026-01-23 11:55:05.437 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:06 compute-0 nova_compute[185173]: 2026-01-23 11:55:06.627 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:07 compute-0 podman[241397]: 2026-01-23 11:55:07.791941552 +0000 UTC m=+0.107081825 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 23 11:55:09 compute-0 podman[241417]: 2026-01-23 11:55:09.779026028 +0000 UTC m=+0.097693549 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, config_id=kepler, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
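The two health_status records above are podman's periodic healthchecks for the edpm_ansible-managed containers: the configured 'test' command (for example '/openstack/healthcheck ipmi', bind-mounted from /var/lib/openstack/healthchecks/...) is run inside the container, and a passing run is logged as health_status=healthy with health_failing_streak=0. The same check can be invoked manually with "podman healthcheck run ceilometer_agent_ipmi", which exits 0 while the container reports healthy.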
Jan 23 11:55:10 compute-0 nova_compute[185173]: 2026-01-23 11:55:10.442 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:11 compute-0 nova_compute[185173]: 2026-01-23 11:55:11.633 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:12 compute-0 nova_compute[185173]: 2026-01-23 11:55:12.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.259 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.260 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.260 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.261 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.369 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 podman[241437]: 2026-01-23 11:55:13.388954295 +0000 UTC m=+0.078318374 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.457 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.458 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.515 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.516 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.574 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.575 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.635 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.646 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.706 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.708 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.762 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.763 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.819 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.821 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.908 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.915 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.997 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:13 compute-0 nova_compute[185173]: 2026-01-23 11:55:13.998 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.053 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.054 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.118 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.119 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.181 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.490 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.492 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4720MB free_disk=72.3799819946289GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.493 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.493 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.964 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.965 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.966 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.967 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:55:14 compute-0 nova_compute[185173]: 2026-01-23 11:55:14.968 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:55:15 compute-0 nova_compute[185173]: 2026-01-23 11:55:15.057 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:55:15 compute-0 nova_compute[185173]: 2026-01-23 11:55:15.074 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:55:15 compute-0 nova_compute[185173]: 2026-01-23 11:55:15.095 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:55:15 compute-0 nova_compute[185173]: 2026-01-23 11:55:15.096 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:55:15 compute-0 nova_compute[185173]: 2026-01-23 11:55:15.446 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.097 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.098 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.099 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.099 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.636 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.914 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.915 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.915 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:55:16 compute-0 nova_compute[185173]: 2026-01-23 11:55:16.916 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:55:18 compute-0 podman[241496]: 2026-01-23 11:55:18.726068342 +0000 UTC m=+0.062145494 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=openstack_network_exporter, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 23 11:55:19 compute-0 nova_compute[185173]: 2026-01-23 11:55:19.943 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:55:19 compute-0 nova_compute[185173]: 2026-01-23 11:55:19.972 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:55:19 compute-0 nova_compute[185173]: 2026-01-23 11:55:19.973 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:55:19 compute-0 nova_compute[185173]: 2026-01-23 11:55:19.974 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:19 compute-0 nova_compute[185173]: 2026-01-23 11:55:19.974 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:20 compute-0 nova_compute[185173]: 2026-01-23 11:55:20.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:55:20 compute-0 nova_compute[185173]: 2026-01-23 11:55:20.452 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:21 compute-0 nova_compute[185173]: 2026-01-23 11:55:21.638 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:25 compute-0 nova_compute[185173]: 2026-01-23 11:55:25.457 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:26 compute-0 nova_compute[185173]: 2026-01-23 11:55:26.640 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:27 compute-0 podman[241520]: 2026-01-23 11:55:27.733060118 +0000 UTC m=+0.058660000 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:55:27 compute-0 podman[241521]: 2026-01-23 11:55:27.754670868 +0000 UTC m=+0.072744349 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 11:55:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:55:29.099 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:55:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:55:29.099 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:55:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:55:29.100 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:55:29 compute-0 podman[201022]: time="2026-01-23T11:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:55:29 compute-0 podman[241560]: 2026-01-23 11:55:29.749160322 +0000 UTC m=+0.081612982 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 11:55:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:55:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 23 11:55:30 compute-0 nova_compute[185173]: 2026-01-23 11:55:30.461 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:31 compute-0 openstack_network_exporter[204160]: ERROR   11:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:55:31 compute-0 openstack_network_exporter[204160]: ERROR   11:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:55:31 compute-0 nova_compute[185173]: 2026-01-23 11:55:31.646 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:31 compute-0 podman[241577]: 2026-01-23 11:55:31.770330267 +0000 UTC m=+0.093566100 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 11:55:35 compute-0 nova_compute[185173]: 2026-01-23 11:55:35.466 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:36 compute-0 nova_compute[185173]: 2026-01-23 11:55:36.645 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:38 compute-0 podman[241602]: 2026-01-23 11:55:38.777073751 +0000 UTC m=+0.097302300 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:55:40 compute-0 nova_compute[185173]: 2026-01-23 11:55:40.471 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:40 compute-0 podman[241620]: 2026-01-23 11:55:40.745176161 +0000 UTC m=+0.077219887 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=kepler, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container)
Jan 23 11:55:41 compute-0 nova_compute[185173]: 2026-01-23 11:55:41.647 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:43 compute-0 podman[241639]: 2026-01-23 11:55:43.722014924 +0000 UTC m=+0.058971639 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:55:45 compute-0 nova_compute[185173]: 2026-01-23 11:55:45.476 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:46 compute-0 nova_compute[185173]: 2026-01-23 11:55:46.648 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:49 compute-0 podman[241664]: 2026-01-23 11:55:49.758338905 +0000 UTC m=+0.094544504 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9)
Jan 23 11:55:50 compute-0 nova_compute[185173]: 2026-01-23 11:55:50.480 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:51 compute-0 nova_compute[185173]: 2026-01-23 11:55:51.650 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:55 compute-0 nova_compute[185173]: 2026-01-23 11:55:55.485 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:56 compute-0 nova_compute[185173]: 2026-01-23 11:55:56.657 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:55:58 compute-0 podman[241685]: 2026-01-23 11:55:58.752098923 +0000 UTC m=+0.075150557 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:55:58 compute-0 podman[241686]: 2026-01-23 11:55:58.790827634 +0000 UTC m=+0.108620931 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 23 11:55:59 compute-0 podman[201022]: time="2026-01-23T11:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:55:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:55:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 23 11:56:00 compute-0 nova_compute[185173]: 2026-01-23 11:56:00.490 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:00 compute-0 podman[241726]: 2026-01-23 11:56:00.753836921 +0000 UTC m=+0.090621039 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 11:56:01 compute-0 openstack_network_exporter[204160]: ERROR   11:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Jan 23 11:56:01 compute-0 openstack_network_exporter[204160]: ERROR   11:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:56:01 compute-0 nova_compute[185173]: 2026-01-23 11:56:01.661 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:02 compute-0 podman[241745]: 2026-01-23 11:56:02.796987925 +0000 UTC m=+0.127778122 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 23 11:56:05 compute-0 nova_compute[185173]: 2026-01-23 11:56:05.495 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:06 compute-0 nova_compute[185173]: 2026-01-23 11:56:06.664 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:09 compute-0 sshd-session[241770]: Invalid user sol from 45.148.10.240 port 51808
Jan 23 11:56:09 compute-0 podman[241772]: 2026-01-23 11:56:09.474730062 +0000 UTC m=+0.072942504 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 11:56:09 compute-0 sshd-session[241770]: Connection closed by invalid user sol 45.148.10.240 port 51808 [preauth]
Jan 23 11:56:10 compute-0 nova_compute[185173]: 2026-01-23 11:56:10.498 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:11 compute-0 nova_compute[185173]: 2026-01-23 11:56:11.667 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:11 compute-0 podman[241793]: 2026-01-23 11:56:11.748979049 +0000 UTC m=+0.073339354 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_id=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Jan 23 11:56:12 compute-0 nova_compute[185173]: 2026-01-23 11:56:12.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.312 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.312 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.313 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
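Annotation: the Acquiring/acquired/released triplet above (with waited/held timings) is the standard trace oslo.concurrency emits around a named lock. A sketch of the pattern; the decorator usage is illustrative, nova's own wrapper adds a prefix around the same primitive:

```python
# Sketch of the pattern producing the Acquiring/acquired/released
# DEBUG triplet above: a fair named lock around a critical section.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # Body runs with the "compute_resources" lock held; lockutils logs
    # the wait time on entry and the hold time on exit, as seen above.
    pass
```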
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.313 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.410 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.471 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.472 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.529 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.531 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.591 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.593 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.663 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.675 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 podman[241823]: 2026-01-23 11:56:14.724972641 +0000 UTC m=+0.061555540 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.741 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.743 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.801 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.803 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.869 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.870 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.929 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.938 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:14 compute-0 nova_compute[185173]: 2026-01-23 11:56:14.998 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.000 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.061 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.062 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.120 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.121 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.176 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
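Annotation: each "Running cmd"/"returned: 0" pair above is the resource audit probing an instance disk with qemu-img info, wrapped in prlimit (1 GiB address space, 30 s CPU) so a malformed image cannot wedge the periodic task. A sketch of the guarded call, using oslo.concurrency's process limits; the re-exec via `python3 -m oslo_concurrency.prlimit` is exactly the prefix visible in the logged command lines:

```python
# Sketch of the guarded probe behind the repeated qemu-img lines.
from oslo_concurrency import processutils

QEMU_IMG_LIMITS = processutils.ProcessLimits(
    cpu_time=30,                        # becomes --cpu=30
    address_space=1024 * 1024 * 1024,   # becomes --as=1073741824
)

# Path copied from the log; any instance disk works the same way.
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk',
    '--force-share', '--output=json',
    prlimit=QEMU_IMG_LIMITS,
)
```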
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.475 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:15 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:15.474 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:56:15 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:15.476 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
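Annotation: the "Matched UPDATE ... table='SB_Global'" line above is ovsdbapp dispatching a declarative row event to the metadata agent, which then defers its chassis ack (the 2-second delay in the preceding line). A hedged sketch of such an event class; the constructor arguments mirror the repr in the log (events=('update',), table='SB_Global', conditions=None), but the body is illustrative, not neutron's code:

```python
# Sketch of a row event matching updates to the single SB_Global row.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # The agent reacts to nb_cfg bumps here; per the log it delays
        # a couple of seconds before acking to Chassis_Private.
        print('nb_cfg is now', row.nb_cfg)
```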
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.493 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.494 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4723MB free_disk=72.3799819946289GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.494 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.495 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.502 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.722 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.723 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.723 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.723 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.723 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.831 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.866 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.867 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:56:15 compute-0 nova_compute[185173]: 2026-01-23 11:56:15.868 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
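Annotation: the inventory dict reported to placement a few lines up fixes this node's schedulable capacity as (total - reserved) * allocation_ratio per resource class. A quick arithmetic check of the logged numbers, consistent with the "total usable vcpus: 8, total allocated vcpus: 3" line:

```python
# Sanity check of the inventory reported above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```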
Jan 23 11:56:16 compute-0 nova_compute[185173]: 2026-01-23 11:56:16.669 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:16 compute-0 nova_compute[185173]: 2026-01-23 11:56:16.863 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:16 compute-0 nova_compute[185173]: 2026-01-23 11:56:16.890 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:16 compute-0 nova_compute[185173]: 2026-01-23 11:56:16.891 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:16 compute-0 nova_compute[185173]: 2026-01-23 11:56:16.891 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:56:17 compute-0 nova_compute[185173]: 2026-01-23 11:56:17.258 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:17 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:17.477 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
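Annotation: the transaction above is the delayed ack from two seconds earlier, writing nb_cfg=6 into external_ids on the agent's Chassis_Private row. With ovsdbapp's API that is a single db_set command, sketched here with the idl object's construction omitted and values copied from the logged DbSetCommand:

```python
# Illustrative equivalent of the DbSetCommand in the transaction above.
def ack_sb_cfg(idl, chassis_uuid, nb_cfg):
    idl.db_set(
        'Chassis_Private', chassis_uuid,
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
        if_exists=True,   # matches if_exists=True in the logged command
    ).execute(check_error=True)
```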
Jan 23 11:56:18 compute-0 nova_compute[185173]: 2026-01-23 11:56:18.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:18 compute-0 nova_compute[185173]: 2026-01-23 11:56:18.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:56:18 compute-0 nova_compute[185173]: 2026-01-23 11:56:18.419 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:56:18 compute-0 nova_compute[185173]: 2026-01-23 11:56:18.419 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:56:18 compute-0 nova_compute[185173]: 2026-01-23 11:56:18.420 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:56:20 compute-0 nova_compute[185173]: 2026-01-23 11:56:20.506 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:20 compute-0 podman[241874]: 2026-01-23 11:56:20.736732383 +0000 UTC m=+0.063206250 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, distribution-scope=public)
Jan 23 11:56:21 compute-0 nova_compute[185173]: 2026-01-23 11:56:21.672 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:23 compute-0 nova_compute[185173]: 2026-01-23 11:56:23.135 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:56:23 compute-0 nova_compute[185173]: 2026-01-23 11:56:23.192 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:56:23 compute-0 nova_compute[185173]: 2026-01-23 11:56:23.193 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:56:23 compute-0 nova_compute[185173]: 2026-01-23 11:56:23.193 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:23 compute-0 nova_compute[185173]: 2026-01-23 11:56:23.194 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:23 compute-0 nova_compute[185173]: 2026-01-23 11:56:23.194 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:56:23 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.511 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.619 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.620 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.638 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.734 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.734 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.741 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.742 185177 INFO nova.compute.claims [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Claim successful on node compute-0.ctlplane.example.com
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.920 185177 DEBUG nova.compute.provider_tree [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.933 185177 DEBUG nova.scheduler.client.report [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.953 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.953 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.995 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 11:56:25 compute-0 nova_compute[185173]: 2026-01-23 11:56:25.996 185177 DEBUG nova.network.neutron [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.018 185177 INFO nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.049 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.144 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.146 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.147 185177 INFO nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Creating image(s)
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.147 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.148 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.149 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.161 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.230 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.232 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "80c014b261205a8ef2db68f438805c389e810b13" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.233 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.249 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.307 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.308 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.349 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13,backing_fmt=raw /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.351 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "80c014b261205a8ef2db68f438805c389e810b13" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
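Annotation: the qemu-img create call just above shows the Qcow2 image backend materializing the new instance's root disk as a copy-on-write overlay over the cached raw base image, sized to the flavor's 1 GiB root disk, all under the base-image lock whose release is logged here. A standalone sketch of the same operation, with both paths copied from the log:

```python
# Standalone equivalent of the overlay creation logged at 11:56:26.308.
import subprocess

base = '/var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13'
disk = '/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk'

subprocess.check_call([
    'qemu-img', 'create', '-f', 'qcow2',
    '-o', f'backing_file={base},backing_fmt=raw',
    disk, '1073741824',   # 1 GiB virtual size, from the flavor
])
```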
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.351 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.409 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/80c014b261205a8ef2db68f438805c389e810b13 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.410 185177 DEBUG nova.virt.disk.api [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Checking if we can resize image /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.411 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.474 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.475 185177 DEBUG nova.virt.disk.api [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Cannot resize image /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.476 185177 DEBUG nova.objects.instance [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'migration_context' on Instance uuid e9de5be9-383e-4139-a192-9a00ac9030d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:56:26 compute-0 nova_compute[185173]: 2026-01-23 11:56:26.674 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.089 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.090 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.091 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.109 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.167 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.168 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.168 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.179 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.239 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.240 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.278 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.279 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.279 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.334 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.335 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.335 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Ensure instance console log exists: /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.336 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.336 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:27 compute-0 nova_compute[185173]: 2026-01-23 11:56:27.336 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.559 185177 DEBUG nova.network.neutron [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Successfully updated port: e0cab06b-811c-4fd7-a9ec-dded37a5bfcf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.585 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.585 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquired lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.585 185177 DEBUG nova.network.neutron [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.764 185177 DEBUG nova.compute.manager [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-changed-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.764 185177 DEBUG nova.compute.manager [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Refreshing instance network info cache due to event network-changed-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 11:56:28 compute-0 nova_compute[185173]: 2026-01-23 11:56:28.764 185177 DEBUG oslo_concurrency.lockutils [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:56:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:29.101 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:29.101 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:29.102 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:29 compute-0 nova_compute[185173]: 2026-01-23 11:56:29.132 185177 DEBUG nova.network.neutron [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 11:56:29 compute-0 podman[201022]: time="2026-01-23T11:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:56:29 compute-0 podman[241925]: 2026-01-23 11:56:29.741760102 +0000 UTC m=+0.071119321 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 11:56:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:56:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 23 11:56:29 compute-0 podman[241924]: 2026-01-23 11:56:29.752431988 +0000 UTC m=+0.086048159 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.252 185177 DEBUG nova.network.neutron [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updating instance_info_cache with network_info: [{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.269 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Releasing lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.270 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance network_info: |[{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.271 185177 DEBUG oslo_concurrency.lockutils [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.271 185177 DEBUG nova.network.neutron [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Refreshing network info cache for port e0cab06b-811c-4fd7-a9ec-dded37a5bfcf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.275 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Start _get_guest_xml network_info=[{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.281 185177 WARNING nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.290 185177 DEBUG nova.virt.libvirt.host [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.291 185177 DEBUG nova.virt.libvirt.host [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.298 185177 DEBUG nova.virt.libvirt.host [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.299 185177 DEBUG nova.virt.libvirt.host [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.299 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.299 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T11:45:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='f2c5c5dd-a580-4885-a3ab-a766eac401c8',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T11:45:38Z,direct_url=<?>,disk_format='qcow2',id=c5833e41-b4db-454e-8f49-014aa18c7dc5,min_disk=0,min_ram=0,name='cirros',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T11:45:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.300 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.300 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.300 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.301 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.301 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.301 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.302 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.302 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.302 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.303 185177 DEBUG nova.virt.hardware [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.306 185177 DEBUG nova.virt.libvirt.vif [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g',id=4,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-ebfyk188',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:56:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODkxOTQyMTU0NTc1Nzc4Mzk0NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncw==
Jan 23 11:56:30 compute-0 nova_compute[185173]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODkxOTQyMTU0NTc1Nzc4Mzk0NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=e9de5be9-383e-4139-a192-9a00ac9030d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.307 185177 DEBUG nova.network.os_vif_util [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.308 185177 DEBUG nova.network.os_vif_util [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.309 185177 DEBUG nova.objects.instance [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9de5be9-383e-4139-a192-9a00ac9030d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.328 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] End _get_guest_xml xml=<domain type="kvm">
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <uuid>e9de5be9-383e-4139-a192-9a00ac9030d0</uuid>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <name>instance-00000004</name>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <memory>524288</memory>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <metadata>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:name>vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g</nova:name>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 11:56:30</nova:creationTime>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:flavor name="m1.small">
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:memory>512</nova:memory>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:ephemeral>1</nova:ephemeral>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:user uuid="d9858533c2284846a8f0f19a1fb45045">admin</nova:user>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:project uuid="bd16a0de2f5e4a8480a855ef0e1a3f14">admin</nova:project>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="c5833e41-b4db-454e-8f49-014aa18c7dc5"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <nova:ports>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         <nova:port uuid="e0cab06b-811c-4fd7-a9ec-dded37a5bfcf">
Jan 23 11:56:30 compute-0 nova_compute[185173]:           <nova:ip type="fixed" address="192.168.0.35" ipVersion="4"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:         </nova:port>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       </nova:ports>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </metadata>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <system>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <entry name="serial">e9de5be9-383e-4139-a192-9a00ac9030d0</entry>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <entry name="uuid">e9de5be9-383e-4139-a192-9a00ac9030d0</entry>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </system>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <os>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </os>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <features>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <apic/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </features>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </clock>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </cpu>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   <devices>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <target dev="vdb" bus="virtio"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.config"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </disk>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <interface type="ethernet">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <mac address="fa:16:3e:c3:4d:2b"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <mtu size="1442"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <target dev="tape0cab06b-81"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </interface>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/console.log" append="off"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </serial>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <video>
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </video>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </rng>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 11:56:30 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 11:56:30 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 11:56:30 compute-0 nova_compute[185173]:   </devices>
Jan 23 11:56:30 compute-0 nova_compute[185173]: </domain>
Jan 23 11:56:30 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.329 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Preparing to wait for external event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.330 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.330 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.330 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.331 185177 DEBUG nova.virt.libvirt.vif [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g',id=4,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-ebfyk188',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:56:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODkxOTQyMTU0NTc1Nzc4Mzk0NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 23 11:56:30 compute-0 nova_compute[185173]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODkxOTQyMTU0NTc1Nzc4Mzk0NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=e9de5be9-383e-4139-a192-9a00ac9030d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.332 185177 DEBUG nova.network.os_vif_util [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.333 185177 DEBUG nova.network.os_vif_util [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.333 185177 DEBUG os_vif [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.334 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.334 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.335 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.338 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.339 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0cab06b-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.340 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0cab06b-81, col_values=(('external_ids', {'iface-id': 'e0cab06b-811c-4fd7-a9ec-dded37a5bfcf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:4d:2b', 'vm-uuid': 'e9de5be9-383e-4139-a192-9a00ac9030d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.341 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:30 compute-0 NetworkManager[56133]: <info>  [1769169390.3428] manager: (tape0cab06b-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.343 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.351 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.352 185177 INFO os_vif [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81')
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.410 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.411 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.411 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.412 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No VIF found with MAC fa:16:3e:c3:4d:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 11:56:30 compute-0 nova_compute[185173]: 2026-01-23 11:56:30.412 185177 INFO nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Using config drive
Jan 23 11:56:30 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 11:56:30.306 185177 DEBUG nova.virt.libvirt.vif [None req-4f818d91-3f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 11:56:30 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 11:56:30.331 185177 DEBUG nova.virt.libvirt.vif [None req-4f818d91-3f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.176 185177 INFO nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Creating config drive at /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.config
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.182 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbw337tsq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.310 185177 DEBUG oslo_concurrency.processutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbw337tsq" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:56:31 compute-0 kernel: tape0cab06b-81: entered promiscuous mode
Jan 23 11:56:31 compute-0 NetworkManager[56133]: <info>  [1769169391.3980] manager: (tape0cab06b-81): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 23 11:56:31 compute-0 ovn_controller[97581]: 2026-01-23T11:56:31Z|00045|binding|INFO|Claiming lport e0cab06b-811c-4fd7-a9ec-dded37a5bfcf for this chassis.
Jan 23 11:56:31 compute-0 ovn_controller[97581]: 2026-01-23T11:56:31Z|00046|binding|INFO|e0cab06b-811c-4fd7-a9ec-dded37a5bfcf: Claiming fa:16:3e:c3:4d:2b 192.168.0.35
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.403 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.413 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:4d:2b 192.168.0.35'], port_security=['fa:16:3e:c3:4d:2b 192.168.0.35'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wvvtbi4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-port-2konbamiqogw', 'neutron:cidrs': '192.168.0.35/24', 'neutron:device_id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wvvtbi4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-port-2konbamiqogw', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.414 106832 INFO neutron.agent.ovn.metadata.agent [-] Port e0cab06b-811c-4fd7-a9ec-dded37a5bfcf in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 bound to our chassis
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.415 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 11:56:31 compute-0 ovn_controller[97581]: 2026-01-23T11:56:31Z|00047|binding|INFO|Setting lport e0cab06b-811c-4fd7-a9ec-dded37a5bfcf ovn-installed in OVS
Jan 23 11:56:31 compute-0 ovn_controller[97581]: 2026-01-23T11:56:31Z|00048|binding|INFO|Setting lport e0cab06b-811c-4fd7-a9ec-dded37a5bfcf up in Southbound
Jan 23 11:56:31 compute-0 openstack_network_exporter[204160]: ERROR   11:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:56:31 compute-0 openstack_network_exporter[204160]: ERROR   11:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.434 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.435 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.447 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9393f2-a89f-46e2-9d3e-1ab99731dcfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:56:31 compute-0 systemd-machined[156550]: New machine qemu-4-instance-00000004.
Jan 23 11:56:31 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.477 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[51a4d300-df8c-49d3-a149-0711716d3f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:56:31 compute-0 systemd-udevd[242009]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.483 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[851163fa-d9e7-4d84-a6fe-862ad5fb409d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:56:31 compute-0 podman[241979]: 2026-01-23 11:56:31.48511859 +0000 UTC m=+0.105890255 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 11:56:31 compute-0 NetworkManager[56133]: <info>  [1769169391.4941] device (tape0cab06b-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 11:56:31 compute-0 NetworkManager[56133]: <info>  [1769169391.4948] device (tape0cab06b-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.514 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[d275db8d-7733-4567-ad9a-e36ef9638eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.533 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[d2930d82-9ac1-441d-9b99-edc6ea702298]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 37321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242017, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.549 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[29ac11f6-34e2-4096-844f-28b299bbec12]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374787, 'tstamp': 374787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242020, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374789, 'tstamp': 374789}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242020, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.551 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.552 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.554 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.554 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.554 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.555 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 11:56:31 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:56:31.555 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.675 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.701 185177 DEBUG nova.compute.manager [req-22d6417f-2b0b-4e62-a8a9-3d37be904c82 req-f58f61c5-3d82-4b68-959f-198baa29bb87 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.702 185177 DEBUG oslo_concurrency.lockutils [req-22d6417f-2b0b-4e62-a8a9-3d37be904c82 req-f58f61c5-3d82-4b68-959f-198baa29bb87 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.702 185177 DEBUG oslo_concurrency.lockutils [req-22d6417f-2b0b-4e62-a8a9-3d37be904c82 req-f58f61c5-3d82-4b68-959f-198baa29bb87 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.703 185177 DEBUG oslo_concurrency.lockutils [req-22d6417f-2b0b-4e62-a8a9-3d37be904c82 req-f58f61c5-3d82-4b68-959f-198baa29bb87 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.703 185177 DEBUG nova.compute.manager [req-22d6417f-2b0b-4e62-a8a9-3d37be904c82 req-f58f61c5-3d82-4b68-959f-198baa29bb87 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Processing event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.887 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.888 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169391.8880866, e9de5be9-383e-4139-a192-9a00ac9030d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.888 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] VM Started (Lifecycle Event)
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.891 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.896 185177 INFO nova.virt.libvirt.driver [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance spawned successfully.
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.896 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.912 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.919 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.924 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.924 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.924 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.925 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.925 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.926 185177 DEBUG nova.virt.libvirt.driver [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.955 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.955 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169391.888187, e9de5be9-383e-4139-a192-9a00ac9030d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.955 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] VM Paused (Lifecycle Event)
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.979 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.983 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169391.8913789, e9de5be9-383e-4139-a192-9a00ac9030d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.983 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] VM Resumed (Lifecycle Event)
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.998 185177 INFO nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Took 5.85 seconds to spawn the instance on the hypervisor.
Jan 23 11:56:31 compute-0 nova_compute[185173]: 2026-01-23 11:56:31.999 185177 DEBUG nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.003 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.010 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.053 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.084 185177 INFO nova.compute.manager [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Took 6.38 seconds to build instance.
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.111 185177 DEBUG oslo_concurrency.lockutils [None req-4f818d91-3f9b-4d0b-a9b5-e151893dd14e d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.279 185177 DEBUG nova.network.neutron [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updated VIF entry in instance network info cache for port e0cab06b-811c-4fd7-a9ec-dded37a5bfcf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.280 185177 DEBUG nova.network.neutron [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updating instance_info_cache with network_info: [{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:56:32 compute-0 nova_compute[185173]: 2026-01-23 11:56:32.294 185177 DEBUG oslo_concurrency.lockutils [req-c7d1d2cf-077e-4926-8e7d-7520489a1c27 req-9c523947-fec8-4b71-992f-7d9f6b8a62d7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
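[editor's note] The "Acquiring lock / Lock ... released" messages in this section come from oslo.concurrency's named-lock primitive. A minimal sketch of the same pattern using the library's public API, assuming oslo.concurrency is installed; the lock name simply mirrors the instance-UUID style seen above:

    from oslo_concurrency import lockutils

    # In-process named lock, the primitive behind the acquire/held/released
    # messages logged by oslo_concurrency.lockutils above.
    with lockutils.lock("e9de5be9-383e-4139-a192-9a00ac9030d0-events"):
        # critical section: mutate per-instance event state here
        pass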
Jan 23 11:56:32 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 11:56:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 11:56:33 compute-0 podman[242049]: 2026-01-23 11:56:33.776671523 +0000 UTC m=+0.107467293 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:56:33 compute-0 nova_compute[185173]: 2026-01-23 11:56:33.778 185177 DEBUG nova.compute.manager [req-ebad1edf-c796-4d86-b4cf-a4b456c9f593 req-34627540-661e-4221-a07d-986611dde935 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 11:56:33 compute-0 nova_compute[185173]: 2026-01-23 11:56:33.778 185177 DEBUG oslo_concurrency.lockutils [req-ebad1edf-c796-4d86-b4cf-a4b456c9f593 req-34627540-661e-4221-a07d-986611dde935 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:56:33 compute-0 nova_compute[185173]: 2026-01-23 11:56:33.779 185177 DEBUG oslo_concurrency.lockutils [req-ebad1edf-c796-4d86-b4cf-a4b456c9f593 req-34627540-661e-4221-a07d-986611dde935 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:56:33 compute-0 nova_compute[185173]: 2026-01-23 11:56:33.779 185177 DEBUG oslo_concurrency.lockutils [req-ebad1edf-c796-4d86-b4cf-a4b456c9f593 req-34627540-661e-4221-a07d-986611dde935 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:56:33 compute-0 nova_compute[185173]: 2026-01-23 11:56:33.779 185177 DEBUG nova.compute.manager [req-ebad1edf-c796-4d86-b4cf-a4b456c9f593 req-34627540-661e-4221-a07d-986611dde935 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] No waiting events found dispatching network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 11:56:33 compute-0 nova_compute[185173]: 2026-01-23 11:56:33.779 185177 WARNING nova.compute.manager [req-ebad1edf-c796-4d86-b4cf-a4b456c9f593 req-34627540-661e-4221-a07d-986611dde935 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received unexpected event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf for instance with vm_state active and task_state None.
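[editor's note] The "No waiting events found ... Received unexpected event" pair above is the pop-without-prepare path: Nova registers a waiter before plugging a VIF and pops it when Neutron delivers network-vif-plugged; here the instance is already active, so no waiter exists. A hedged, simplified sketch of that prepare/pop pattern in plain threading, not Nova's code (all names hypothetical):

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_event(instance_uuid, event_name):
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev  # spawner blocks on ev.wait() until the plug completes

    def pop_event(instance_uuid, event_name):
        """Called when an external event arrives from Neutron."""
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print("WARNING: received unexpected event", event_name)
        else:
            ev.set()  # wake the waiting spawner

    # The 11:56:33 warning corresponds to popping with no prepared waiter:
    pop_event("e9de5be9-...", "network-vif-plugged-e0cab06b-...")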
Jan 23 11:56:35 compute-0 nova_compute[185173]: 2026-01-23 11:56:35.344 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:36 compute-0 nova_compute[185173]: 2026-01-23 11:56:36.678 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:39 compute-0 podman[242075]: 2026-01-23 11:56:39.774001298 +0000 UTC m=+0.097768701 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 23 11:56:40 compute-0 nova_compute[185173]: 2026-01-23 11:56:40.348 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:41 compute-0 nova_compute[185173]: 2026-01-23 11:56:41.680 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:42 compute-0 podman[242094]: 2026-01-23 11:56:42.7479112 +0000 UTC m=+0.081882549 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0)
Jan 23 11:56:45 compute-0 nova_compute[185173]: 2026-01-23 11:56:45.354 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:45 compute-0 podman[242114]: 2026-01-23 11:56:45.449904508 +0000 UTC m=+0.062501083 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 11:56:46 compute-0 nova_compute[185173]: 2026-01-23 11:56:46.681 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:50 compute-0 nova_compute[185173]: 2026-01-23 11:56:50.357 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:51 compute-0 nova_compute[185173]: 2026-01-23 11:56:51.684 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:51 compute-0 podman[242138]: 2026-01-23 11:56:51.728920472 +0000 UTC m=+0.064856999 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal)
Jan 23 11:56:55 compute-0 nova_compute[185173]: 2026-01-23 11:56:55.360 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:56 compute-0 nova_compute[185173]: 2026-01-23 11:56:56.687 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:56:59 compute-0 podman[201022]: time="2026-01-23T11:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:56:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:56:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
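[editor's note] The two GET lines above are the prometheus-podman-exporter scraping podman's libpod REST API over the unix socket mounted in its config (/run/podman/podman.sock). The same query can be reproduced from the standard library; a sketch under the assumption that the socket exists and the caller has permission to read it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    # Socket path and API version taken from the log lines above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")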
Jan 23 11:57:00 compute-0 nova_compute[185173]: 2026-01-23 11:57:00.366 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:00 compute-0 podman[242159]: 2026-01-23 11:57:00.756671695 +0000 UTC m=+0.079530177 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute)
Jan 23 11:57:00 compute-0 podman[242158]: 2026-01-23 11:57:00.776816855 +0000 UTC m=+0.100030345 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 11:57:01 compute-0 openstack_network_exporter[204160]: ERROR   11:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:57:01 compute-0 openstack_network_exporter[204160]: ERROR   11:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.453 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.455 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283bb610a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.461 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:57:01 compute-0 ovn_controller[97581]: 2026-01-23T11:57:01Z|00049|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.465 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'name': 'vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.468 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e', 'name': 'vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.470 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e9de5be9-383e-4139-a192-9a00ac9030d0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 23 11:57:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:01.471 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e9de5be9-383e-4139-a192-9a00ac9030d0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad70b57d9194f6532b182b578b16289681d355eb6a1afd27a70859dd1387cbc9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 23 11:57:01 compute-0 nova_compute[185173]: 2026-01-23 11:57:01.689 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:01 compute-0 podman[242210]: 2026-01-23 11:57:01.743500906 +0000 UTC m=+0.071250576 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:57:01 compute-0 ovn_controller[97581]: 2026-01-23T11:57:01Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:4d:2b 192.168.0.35
Jan 23 11:57:01 compute-0 ovn_controller[97581]: 2026-01-23T11:57:01Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:4d:2b 192.168.0.35
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.240 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 23 Jan 2026 11:57:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2ea3db57-c3d2-4f65-a864-ab745b3acd79 x-openstack-request-id: req-2ea3db57-c3d2-4f65-a864-ab745b3acd79 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.240 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e9de5be9-383e-4139-a192-9a00ac9030d0", "name": "vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g", "status": "ACTIVE", "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "user_id": "d9858533c2284846a8f0f19a1fb45045", "metadata": {"metering.server_group": "500baa09-1e39-474e-b275-8b2dffe3a65b"}, "hostId": "47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb", "image": {"id": "c5833e41-b4db-454e-8f49-014aa18c7dc5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c5833e41-b4db-454e-8f49-014aa18c7dc5"}]}, "flavor": {"id": "f2c5c5dd-a580-4885-a3ab-a766eac401c8", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/f2c5c5dd-a580-4885-a3ab-a766eac401c8"}]}, "created": "2026-01-23T11:56:23Z", "updated": "2026-01-23T11:56:32Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.35", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c3:4d:2b"}, {"version": 4, "addr": "192.168.122.210", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c3:4d:2b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e9de5be9-383e-4139-a192-9a00ac9030d0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e9de5be9-383e-4139-a192-9a00ac9030d0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-23T11:56:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.240 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e9de5be9-383e-4139-a192-9a00ac9030d0 used request id req-2ea3db57-c3d2-4f65-a864-ab745b3acd79 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
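[editor's note] The REQ/RESP pair above is python-novaclient fetching one server's metadata over a keystoneauth1 session. A minimal sketch of the same call with those libraries; the auth URL and credentials are placeholders, only the server UUID and microversion come from the log:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder credentials; substitute real values for your cloud.
    auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    sess = session.Session(auth=auth)
    nova = client.Client("2.1", session=sess)  # microversion seen in the REQ

    server = nova.servers.get("e9de5be9-383e-4139-a192-9a00ac9030d0")
    print(server.name, server.status)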
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.241 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'name': 'vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.242 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.242 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:57:02.242404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.247 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.252 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes.delta volume: 2742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.257 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes.delta volume: 550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.260 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e9de5be9-383e-4139-a192-9a00ac9030d0 / tape0cab06b-81 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.261 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
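[editor's note] The "No delta meter predecessor" line above shows how .delta meters behave: the pollster turns a cumulative vNIC counter into a per-interval difference, and the first sample for a new instance has nothing to subtract, so it reports 0 (as it does for e9de5be9 here). A hedged sketch of that bookkeeping, not Ceilometer's code:

    _prev = {}  # (instance_uuid, device) -> last cumulative counter value

    def delta(key, current):
        """Cumulative counter -> per-interval delta; first sample yields 0."""
        prev = _prev.get(key)
        _prev[key] = current
        if prev is None:
            return 0                       # "No delta meter predecessor" case
        return max(current - prev, 0)      # guard against counter resets

    assert delta(("e9de5be9", "tape0cab06b-81"), 1000) == 0    # first sample
    assert delta(("e9de5be9", "tape0cab06b-81"), 3742) == 2742 # next interval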
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.262 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:57:02.262608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.283 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.283 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.284 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.306 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.306 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.307 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.325 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.326 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.326 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.345 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 20119552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.345 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.346 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
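Each cycle above repeats the same coordination check: the manager asks whether the pollster belongs to a polling source that requires coordination, and since none does here the hashrings are [None] and every locally discovered instance is polled by this agent. When coordination is enabled, resources are instead partitioned across agents via a hash ring. A simplified, self-contained sketch of that partitioning idea (hash-modulo stand-in, not tooz's actual HashRing API):

    import hashlib

    # Hash-based partitioning: each agent polls only the resources that
    # hash to it. Simplified stand-in for a coordination hash ring.
    def owner(resource_id: str, members: list[str]) -> str:
        digest = int(hashlib.sha256(resource_id.encode()).hexdigest(), 16)
        return members[digest % len(members)]

    members = ["compute-0", "compute-1"]
    for rid in ("55846fbf", "84b3f69a", "ee2f2821", "e9de5be9"):
        print(rid, "->", owner(rid, members))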
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.347 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:57:02.347374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.404 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.404 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.405 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.464 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.465 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.465 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.526 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.527 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.527 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.594 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 41619456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.595 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.595 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
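The disk.device.write.bytes samples above (one cumulative counter per block device, per instance) are the kind of figures libvirt exposes through its per-device I/O counters. A minimal sketch using the libvirt-python bindings, assuming a reachable local libvirtd and that "vda" is a valid target device for this guest:

    import libvirt

    # Cumulative per-device I/O counters for one domain; blockStats()
    # returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e9de5be9-383e-4139-a192-9a00ac9030d0")
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print(f"wr_bytes={wr_bytes} wr_req={wr_req} rd_bytes={rd_bytes}")
    conn.close()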
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-23T11:57:02.596907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
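The ERROR at 11:57:02.597 is the manager's permanent-failure path, not a crash: LibvirtInspector exposes no data for the legacy *.rate meters, so OutgoingBytesRatePollster raises PollsterPermanentError for the affected server and the manager excludes that resource from future polls of this meter on this source ("Prevent pollster ... anymore!"). A sketch of that skip-list pattern (hypothetical names, not ceilometer's manager code):

    # Permanent-error skip list: once a pollster signals it can never
    # serve a resource, stop retrying that resource for that meter.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    _blocked: dict[str, set[str]] = {}

    def poll(meter, resources, get_samples):
        todo = [r for r in resources if r not in _blocked.get(meter, set())]
        try:
            return get_samples(todo)
        except PollsterPermanentError as exc:
            _blocked.setdefault(meter, set()).update(exc.resources)
            print(f"Prevent pollster {meter} from polling {exc.resources} anymore!")
            return []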
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:57:02.598008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.598 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 803215933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.599 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 8862519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.599 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.599 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 1850558272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.599 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 8667328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.599 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.600 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 585404560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.600 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 7490744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.600 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
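The disk.device.write.latency volumes above are cumulative service-time counters (nanoseconds), which libvirt exposes through its extended block-stats dictionary. A sketch with the libvirt-python bindings; the key names are assumptions based on libvirt's block-stats fields, so they are read defensively:

    import libvirt

    # Extended block stats: cumulative total I/O service times in ns,
    # the kind of counter behind the write/read latency meters.
    # Key names assumed; read with .get() in case they are absent.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    stats = dom.blockStatsFlags("vda", 0)
    print("wr_total_times(ns):", stats.get("wr_total_times"))
    print("rd_total_times(ns):", stats.get("rd_total_times"))
    conn.close()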
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:57:02.601439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.601 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.602 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.602 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
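network.incoming.packets.error reads 0 for all four instances, consistent with healthy vNICs. Per-interface counters of this shape come from libvirt's interface statistics, an 8-tuple of rx/tx bytes, packets, errors and drops. A minimal sketch, reusing the tap device name logged earlier (tape0cab06b-81) and assuming it is attached to this domain:

    import libvirt

    # Per-vNIC counters: interfaceStats() returns (rx_bytes, rx_packets,
    # rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop).
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e9de5be9-383e-4139-a192-9a00ac9030d0")
    (rx_bytes, rx_pkts, rx_errs, rx_drop,
     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats("tape0cab06b-81")
    print(f"rx_errs={rx_errs} rx_drop={rx_drop} tx_errs={tx_errs} tx_drop={tx_drop}")
    conn.close()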
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:57:02.603536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.603 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.604 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.604 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.604 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.604 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.605 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.605 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.605 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.605 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.605 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.606 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.607 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.607 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.607 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.607 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:57:02.606911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.609 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:57:02.608900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.609 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:57:02.610187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.610 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.611 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 363540160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.611 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 61167194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.611 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 48392812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.611 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 374273377 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.612 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 71332104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.612 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 53834488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.612 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 316698125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.612 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 48405117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.612 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 41807708 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.614 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.614 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.614 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:57:02.613931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.614 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.615 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.615 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.615 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.615 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.615 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.616 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 20258816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.616 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.616 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
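The usage and allocation figures above track each other closely (e.g. 21233664 usage vs 21307392 allocation for 55846fbf's first device), as expected for thin-provisioned images: both families derive from libvirt's per-device block info, which reports capacity, allocation and on-disk physical size. A sketch with the libvirt-python bindings, assuming "vda" as the target device; exactly how ceilometer maps these three fields onto its meters is not visible in the log, so treat the mapping as an assumption:

    import libvirt

    # Per-device block info: blockInfo() returns
    # [capacity, allocation, physical] in bytes.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    capacity, allocation, physical = dom.blockInfo("vda")
    print(f"capacity={capacity} allocation={allocation} physical={physical}")
    conn.close()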
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.617 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:57:02.617501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:57:02.618741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.618 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.619 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.619 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.619 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.620 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:57:02.620544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.648 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.670 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.693 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.716 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/memory.usage volume: 33.19140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
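The fractional memory.usage volumes (e.g. 48.79296875 for 55846fbf) are megabytes derived from KiB-granular guest statistics. libvirt's memoryStats() returns a KiB-valued dict; one plausible derivation, shown purely as an assumption, is (available - unused) / 1024 with a fallback to rss when the balloon driver reports nothing:

    import libvirt

    # Guest memory stats in KiB; the usage-in-MB derivation here is an
    # assumption for illustration, with an rss fallback.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    stats = dom.memoryStats()
    if "available" in stats and "unused" in stats:
        usage_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        usage_mb = stats.get("rss", 0) / 1024.0
    print(f"memory.usage ~= {usage_mb} MB")
    conn.close()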
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.717 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.718 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.718 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:57:02.718046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.718 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.719 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
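power.state volume: 1 for all four instances matches libvirt's VIR_DOMAIN_RUNNING constant, whose numeric value is 1. A minimal check with the libvirt-python bindings:

    import libvirt

    # state() returns [state, reason]; 1 == libvirt.VIR_DOMAIN_RUNNING.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e")
    state, reason = dom.state()
    print("running:", state == libvirt.VIR_DOMAIN_RUNNING, "state:", state)
    conn.close()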
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.720 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:57:02.720250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.720 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.721 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.721 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes volume: 1388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 38890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:57:02.722555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.723 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/cpu volume: 402290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.723 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/cpu volume: 33530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.723 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/cpu volume: 29410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.724 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.725 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-23T11:57:02.724691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.726 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:57:02.726073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.726 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.726 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.727 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.728 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:57:02.728270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.728 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.729 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.729 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:57:02.730506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.731 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.731 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.731 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.731 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.732 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.732 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.732 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.732 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.733 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 22674432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.733 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 2160128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.733 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 328014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.734 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:57:02.734998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.735 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.735 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets volume: 69 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.735 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.736 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets volume: 6 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.737 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:57:02.737156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.737 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.738 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.738 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:57:02.739541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.739 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.740 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.740 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.740 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.741 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.741 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.741 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.741 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.742 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.742 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.742 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.742 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.743 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.744 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.744 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes volume: 7746 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.745 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:57:02.744177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.745 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes volume: 1073 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.746 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.746 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.746 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.747 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.747 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:57:02.746363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.747 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.748 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.748 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.748 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.748 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.749 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.749 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.749 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:02 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:57:02.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:57:04 compute-0 podman[242229]: 2026-01-23 11:57:04.767408343 +0000 UTC m=+0.103419388 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 11:57:05 compute-0 nova_compute[185173]: 2026-01-23 11:57:05.369 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:06 compute-0 nova_compute[185173]: 2026-01-23 11:57:06.691 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:10 compute-0 nova_compute[185173]: 2026-01-23 11:57:10.372 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:10 compute-0 podman[242255]: 2026-01-23 11:57:10.726119242 +0000 UTC m=+0.059735985 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:57:11 compute-0 nova_compute[185173]: 2026-01-23 11:57:11.693 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:12 compute-0 nova_compute[185173]: 2026-01-23 11:57:12.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:13 compute-0 podman[242275]: 2026-01-23 11:57:13.764364092 +0000 UTC m=+0.098655804 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, name=ubi9)
Jan 23 11:57:15 compute-0 nova_compute[185173]: 2026-01-23 11:57:15.374 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:15 compute-0 podman[242294]: 2026-01-23 11:57:15.748048404 +0000 UTC m=+0.076192495 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.274 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.275 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.276 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.276 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.373 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.439 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.440 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.502 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.504 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.565 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.566 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.627 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.634 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.692 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.693 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.709 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.750 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.751 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.812 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.815 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.878 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.885 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.942 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:16 compute-0 nova_compute[185173]: 2026-01-23 11:57:16.943 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.010 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.011 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.075 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.077 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.137 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.144 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.205 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.207 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.265 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.267 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.323 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.325 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.381 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.708 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.709 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=72.35544967651367GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.709 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.710 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.934 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.934 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.934 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.935 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.935 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.935 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.961 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.982 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 11:57:17 compute-0 nova_compute[185173]: 2026-01-23 11:57:17.983 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 11:57:18 compute-0 nova_compute[185173]: 2026-01-23 11:57:18.016 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 11:57:18 compute-0 nova_compute[185173]: 2026-01-23 11:57:18.063 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 11:57:18 compute-0 nova_compute[185173]: 2026-01-23 11:57:18.168 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:57:18 compute-0 nova_compute[185173]: 2026-01-23 11:57:18.190 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:57:18 compute-0 nova_compute[185173]: 2026-01-23 11:57:18.217 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:57:18 compute-0 nova_compute[185173]: 2026-01-23 11:57:18.217 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:57:19 compute-0 nova_compute[185173]: 2026-01-23 11:57:19.212 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:19 compute-0 nova_compute[185173]: 2026-01-23 11:57:19.213 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:19 compute-0 nova_compute[185173]: 2026-01-23 11:57:19.213 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:19 compute-0 nova_compute[185173]: 2026-01-23 11:57:19.213 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:57:20 compute-0 nova_compute[185173]: 2026-01-23 11:57:20.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:20 compute-0 nova_compute[185173]: 2026-01-23 11:57:20.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:57:20 compute-0 nova_compute[185173]: 2026-01-23 11:57:20.377 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:21 compute-0 nova_compute[185173]: 2026-01-23 11:57:21.137 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:57:21 compute-0 nova_compute[185173]: 2026-01-23 11:57:21.137 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:57:21 compute-0 nova_compute[185173]: 2026-01-23 11:57:21.137 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:57:21 compute-0 nova_compute[185173]: 2026-01-23 11:57:21.699 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:22 compute-0 podman[242368]: 2026-01-23 11:57:22.737376551 +0000 UTC m=+0.068944856 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal)
Jan 23 11:57:23 compute-0 nova_compute[185173]: 2026-01-23 11:57:23.560 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updating instance_info_cache with network_info: [{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:57:23 compute-0 nova_compute[185173]: 2026-01-23 11:57:23.578 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:57:23 compute-0 nova_compute[185173]: 2026-01-23 11:57:23.579 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:57:23 compute-0 nova_compute[185173]: 2026-01-23 11:57:23.579 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:23 compute-0 nova_compute[185173]: 2026-01-23 11:57:23.580 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:23 compute-0 nova_compute[185173]: 2026-01-23 11:57:23.581 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:57:25 compute-0 nova_compute[185173]: 2026-01-23 11:57:25.381 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:26 compute-0 nova_compute[185173]: 2026-01-23 11:57:26.702 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:57:29.102 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:57:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:57:29.103 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:57:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:57:29.104 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:57:29 compute-0 podman[201022]: time="2026-01-23T11:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:57:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:57:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 23 11:57:30 compute-0 nova_compute[185173]: 2026-01-23 11:57:30.385 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:31 compute-0 openstack_network_exporter[204160]: ERROR   11:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:57:31 compute-0 openstack_network_exporter[204160]: ERROR   11:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:57:31 compute-0 nova_compute[185173]: 2026-01-23 11:57:31.703 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:31 compute-0 podman[242389]: 2026-01-23 11:57:31.733118055 +0000 UTC m=+0.065384259 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:57:31 compute-0 podman[242390]: 2026-01-23 11:57:31.761169397 +0000 UTC m=+0.086391447 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0)
Jan 23 11:57:31 compute-0 podman[242431]: 2026-01-23 11:57:31.872375104 +0000 UTC m=+0.083549084 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:57:35 compute-0 nova_compute[185173]: 2026-01-23 11:57:35.390 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:35 compute-0 podman[242451]: 2026-01-23 11:57:35.772894347 +0000 UTC m=+0.104188265 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 11:57:36 compute-0 nova_compute[185173]: 2026-01-23 11:57:36.705 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:40 compute-0 nova_compute[185173]: 2026-01-23 11:57:40.396 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:41 compute-0 nova_compute[185173]: 2026-01-23 11:57:41.707 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:41 compute-0 podman[242477]: 2026-01-23 11:57:41.76114152 +0000 UTC m=+0.085683802 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 11:57:44 compute-0 podman[242495]: 2026-01-23 11:57:44.797039738 +0000 UTC m=+0.116297419 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:57:45 compute-0 nova_compute[185173]: 2026-01-23 11:57:45.401 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:46 compute-0 nova_compute[185173]: 2026-01-23 11:57:46.711 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:46 compute-0 podman[242515]: 2026-01-23 11:57:46.766415016 +0000 UTC m=+0.084945894 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:57:50 compute-0 nova_compute[185173]: 2026-01-23 11:57:50.406 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:51 compute-0 nova_compute[185173]: 2026-01-23 11:57:51.712 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:53 compute-0 podman[242540]: 2026-01-23 11:57:53.738037946 +0000 UTC m=+0.070023410 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal)
Jan 23 11:57:55 compute-0 nova_compute[185173]: 2026-01-23 11:57:55.409 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:56 compute-0 nova_compute[185173]: 2026-01-23 11:57:56.714 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:57:59 compute-0 podman[201022]: time="2026-01-23T11:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:57:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:57:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
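[editor's note] The two GET lines above are libpod REST API calls served over podman's unix socket (the podman_exporter entry later in this log mounts /run/podman/podman.sock, which is assumed to be that socket). A minimal sketch of the same containers/json call from Python's standard library:

    import http.client
    import json
    import socket

    class UDSHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket; the host name is a placeholder."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UDSHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])  # container names, as listed above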
Jan 23 11:58:00 compute-0 nova_compute[185173]: 2026-01-23 11:58:00.414 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:01 compute-0 openstack_network_exporter[204160]: ERROR   11:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:58:01 compute-0 openstack_network_exporter[204160]: ERROR   11:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
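[editor's note] The two openstack_network_exporter ERRORs above map to the ovs-appctl commands dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only exist for OVS's userspace (netdev/DPDK) datapath. On a host running the kernel datapath there is no such datapath to address, so ovs-vswitchd answers "please specify an existing datapath"; the errors are expected noise here rather than a fault. A sketch reproducing the probe, assuming ovs-appctl can reach the local ovs-vswitchd control socket:

    import subprocess

    # PMD stats exist only for the userspace datapath; on a kernel-datapath
    # host this exits non-zero with the same message logged above.
    r = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True,
    )
    print(r.returncode, (r.stderr or r.stdout).strip())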
Jan 23 11:58:01 compute-0 nova_compute[185173]: 2026-01-23 11:58:01.717 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:02 compute-0 podman[242560]: 2026-01-23 11:58:02.74198749 +0000 UTC m=+0.064434378 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:58:02 compute-0 podman[242561]: 2026-01-23 11:58:02.754755219 +0000 UTC m=+0.072635997 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 23 11:58:02 compute-0 podman[242562]: 2026-01-23 11:58:02.772722321 +0000 UTC m=+0.088753639 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 11:58:05 compute-0 nova_compute[185173]: 2026-01-23 11:58:05.417 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:06 compute-0 nova_compute[185173]: 2026-01-23 11:58:06.718 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:06 compute-0 podman[242628]: 2026-01-23 11:58:06.781878697 +0000 UTC m=+0.110218857 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 11:58:08 compute-0 nova_compute[185173]: 2026-01-23 11:58:08.397 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.366 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Triggering sync for uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.367 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Triggering sync for uuid 84b3f69a-6ab7-406d-939b-a485518755a5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.367 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Triggering sync for uuid ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.367 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Triggering sync for uuid e9de5be9-383e-4139-a192-9a00ac9030d0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.368 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.369 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.369 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.370 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "84b3f69a-6ab7-406d-939b-a485518755a5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.370 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.371 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.372 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:09 compute-0 nova_compute[185173]: 2026-01-23 11:58:09.373 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:10 compute-0 nova_compute[185173]: 2026-01-23 11:58:10.419 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:10 compute-0 nova_compute[185173]: 2026-01-23 11:58:10.731 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 1.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:58:10 compute-0 nova_compute[185173]: 2026-01-23 11:58:10.734 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "84b3f69a-6ab7-406d-939b-a485518755a5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 1.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:58:10 compute-0 nova_compute[185173]: 2026-01-23 11:58:10.754 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 1.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:58:10 compute-0 nova_compute[185173]: 2026-01-23 11:58:10.786 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 1.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
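[editor's note] The Acquiring / acquired / released triplets above (with their waited/held timings) are oslo.concurrency's lock logging: nova's _sync_power_states wraps each instance's power-state query in a lock named after the instance UUID, and the "inner" in the logged paths is the wrapper that lockutils.synchronized builds. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the sync body is a placeholder:

    from oslo_concurrency import lockutils

    def sync_power_state(uuid):
        # The decorator's wrapper ("inner" in the logged file:line references)
        # emits the Acquiring / acquired / released DEBUG lines when
        # oslo.concurrency debug logging is enabled.
        @lockutils.synchronized(uuid)
        def query_driver_power_state_and_sync():
            print("syncing", uuid)  # placeholder for nova's real sync work

        query_driver_power_state_and_sync()

    sync_power_state("55846fbf-a87a-4cba-be0b-23125d3d9ef4")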
Jan 23 11:58:11 compute-0 nova_compute[185173]: 2026-01-23 11:58:11.721 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:12 compute-0 podman[242654]: 2026-01-23 11:58:12.74612707 +0000 UTC m=+0.066574435 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 11:58:13 compute-0 nova_compute[185173]: 2026-01-23 11:58:13.210 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:15 compute-0 nova_compute[185173]: 2026-01-23 11:58:15.423 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:15 compute-0 podman[242674]: 2026-01-23 11:58:15.557013047 +0000 UTC m=+0.093259387 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, vcs-type=git, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, config_id=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Jan 23 11:58:16 compute-0 nova_compute[185173]: 2026-01-23 11:58:16.722 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.267 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.269 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.371 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.432 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.432 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.492 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.493 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.553 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.554 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.614 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.621 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.678 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.679 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.738 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.739 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 podman[242706]: 2026-01-23 11:58:17.751829577 +0000 UTC m=+0.086980400 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.806 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.808 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.864 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.871 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.930 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.931 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.985 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:17 compute-0 nova_compute[185173]: 2026-01-23 11:58:17.986 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.044 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.046 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.104 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.110 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.174 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.175 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.231 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.232 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.289 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.290 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.348 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
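[editor's note] Each disk audit above runs qemu-img info under oslo's prlimit wrapper, which caps the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 s) so a malformed image cannot hang or balloon the compute service; --force-share allows reading a disk the running guest holds open. The command is reproducible verbatim from the log (the path is one of the instance disks audited above):

    import json
    import subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",          # cap address space at 1 GiB
        "--cpu=30",                 # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk",
        "--force-share", "--output=json",
    ]
    info = json.loads(
        subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
    print(info["format"], info["virtual-size"])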
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.719 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.720 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=72.35544967651367GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.721 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:18 compute-0 nova_compute[185173]: 2026-01-23 11:58:18.721 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.018 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.018 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.019 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.019 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.020 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.020 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.226 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.239 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
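[editor's note] The inventory dict above is what placement uses to size this provider. Schedulable capacity per resource class follows placement's usual formula, capacity = (total - reserved) * allocation_ratio, which is consistent with the "Total usable vcpus: 8, total allocated vcpus: 4" view under 4x CPU overcommit. A quick check with the logged numbers:

    # Inventory as reported for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2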
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.240 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.240 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:58:19 compute-0 nova_compute[185173]: 2026-01-23 11:58:19.241 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:20 compute-0 nova_compute[185173]: 2026-01-23 11:58:20.267 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:20 compute-0 nova_compute[185173]: 2026-01-23 11:58:20.268 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:20 compute-0 nova_compute[185173]: 2026-01-23 11:58:20.317 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:20 compute-0 nova_compute[185173]: 2026-01-23 11:58:20.318 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:58:20 compute-0 nova_compute[185173]: 2026-01-23 11:58:20.318 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 11:58:20 compute-0 nova_compute[185173]: 2026-01-23 11:58:20.427 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:21 compute-0 nova_compute[185173]: 2026-01-23 11:58:21.144 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:58:21 compute-0 nova_compute[185173]: 2026-01-23 11:58:21.144 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:58:21 compute-0 nova_compute[185173]: 2026-01-23 11:58:21.145 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:58:21 compute-0 nova_compute[185173]: 2026-01-23 11:58:21.145 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 11:58:21 compute-0 nova_compute[185173]: 2026-01-23 11:58:21.724 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.178 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.227 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.227 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
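The info cache written at 11:58:23 is a JSON list of VIF dicts. A short sketch pulling the fixed and floating addresses out of that structure; the payload below is trimmed from the logged one, with field names copied verbatim:

    import json

    network_info = json.loads('''[{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.65", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.190",
                                   "type": "floating"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # 4c18896b-... 192.168.0.65 -> ['192.168.122.190']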
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.228 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.228 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.228 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.228 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:23 compute-0 nova_compute[185173]: 2026-01-23 11:58:23.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 11:58:24 compute-0 podman[242766]: 2026-01-23 11:58:24.75521567 +0000 UTC m=+0.074245441 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.openshift.expose-services=, release=1755695350)
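The health_status events like the one above are podman running the container's configured healthcheck ('test': '/openstack/healthcheck ...') on its interval. The same check can be driven by hand; a sketch assuming the podman CLI is on PATH, using the container name from the log:

    import subprocess

    # Exit status 0 means the configured test passed (health_status=healthy).
    result = subprocess.run(
        ["podman", "healthcheck", "run", "openstack_network_exporter"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else "unhealthy")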
Jan 23 11:58:25 compute-0 nova_compute[185173]: 2026-01-23 11:58:25.431 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:26 compute-0 nova_compute[185173]: 2026-01-23 11:58:26.727 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:27 compute-0 nova_compute[185173]: 2026-01-23 11:58:27.248 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:58:27 compute-0 nova_compute[185173]: 2026-01-23 11:58:27.249 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 11:58:27 compute-0 nova_compute[185173]: 2026-01-23 11:58:27.270 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 11:58:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:58:29.104 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:58:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:58:29.105 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:58:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:58:29.105 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:58:29 compute-0 podman[201022]: time="2026-01-23T11:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:58:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:58:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
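The two GETs above are the podman exporter scraping the libpod REST API over the unix socket (the socket path appears later in the podman_exporter config as CONTAINER_HOST). A self-contained sketch of the same query using only the standard library; socket path and API version are taken from the log:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")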
Jan 23 11:58:30 compute-0 nova_compute[185173]: 2026-01-23 11:58:30.436 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:31 compute-0 sshd-session[242786]: Invalid user solana from 45.148.10.240 port 59326
Jan 23 11:58:31 compute-0 sshd-session[242786]: Connection closed by invalid user solana 45.148.10.240 port 59326 [preauth]
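The invalid-user pair above is routine internet background scanning against sshd. A small sketch for tallying such attempts per source address from a journal export like this one (the filename is hypothetical; the regex matches the exact format logged here):

    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")
    hits = Counter()
    with open("journal.log") as fh:   # hypothetical export of this journal
        for line in fh:
            if (m := pat.search(line)):
                hits[m.group(2)] += 1
    for ip, count in hits.most_common(10):
        print(ip, count)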
Jan 23 11:58:31 compute-0 openstack_network_exporter[204160]: ERROR   11:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:58:31 compute-0 openstack_network_exporter[204160]: ERROR   11:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
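Both ERRORs above recur on every scrape: the exporter calls ovs-appctl's dpif-netdev/pmd-* commands, which only apply to the userspace (netdev/DPDK) datapath, and this host runs the kernel (system) datapath, so OVS answers "please specify an existing datapath". A sketch that probes for a userspace datapath before asking for PMD stats, assuming ovs-appctl and ovs-dpctl are on PATH:

    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # ovs-dpctl dump-dps prints entries like "system@ovs-system";
    # pmd-* stats only exist for netdev@... datapaths.
    if any(dp.startswith("netdev@") for dp in run("ovs-dpctl", "dump-dps").split()):
        print(run("ovs-appctl", "dpif-netdev/pmd-rxq-show"))
    else:
        print("no userspace datapath; skipping PMD stats")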
Jan 23 11:58:31 compute-0 nova_compute[185173]: 2026-01-23 11:58:31.730 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:32 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 11:58:32 compute-0 podman[242791]: 2026-01-23 11:58:32.878866747 +0000 UTC m=+0.064083590 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 11:58:32 compute-0 podman[242789]: 2026-01-23 11:58:32.891076303 +0000 UTC m=+0.076886589 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 11:58:32 compute-0 podman[242790]: 2026-01-23 11:58:32.924988344 +0000 UTC m=+0.098422930 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:58:35 compute-0 nova_compute[185173]: 2026-01-23 11:58:35.439 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:36 compute-0 nova_compute[185173]: 2026-01-23 11:58:36.732 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:37 compute-0 podman[242847]: 2026-01-23 11:58:37.797741217 +0000 UTC m=+0.123084588 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 11:58:40 compute-0 nova_compute[185173]: 2026-01-23 11:58:40.444 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:41 compute-0 nova_compute[185173]: 2026-01-23 11:58:41.735 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:43 compute-0 podman[242871]: 2026-01-23 11:58:43.725616014 +0000 UTC m=+0.062121087 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 23 11:58:45 compute-0 nova_compute[185173]: 2026-01-23 11:58:45.447 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:45 compute-0 podman[242891]: 2026-01-23 11:58:45.745444234 +0000 UTC m=+0.072677538 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_id=kepler, container_name=kepler, release-0.7.12=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc.)
Jan 23 11:58:46 compute-0 nova_compute[185173]: 2026-01-23 11:58:46.737 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:48 compute-0 podman[242911]: 2026-01-23 11:58:48.731711762 +0000 UTC m=+0.058100300 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 11:58:50 compute-0 nova_compute[185173]: 2026-01-23 11:58:50.453 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:51 compute-0 nova_compute[185173]: 2026-01-23 11:58:51.739 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:55 compute-0 nova_compute[185173]: 2026-01-23 11:58:55.456 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:55 compute-0 podman[242936]: 2026-01-23 11:58:55.803094482 +0000 UTC m=+0.119745505 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 11:58:56 compute-0 nova_compute[185173]: 2026-01-23 11:58:56.740 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:58:59 compute-0 podman[201022]: time="2026-01-23T11:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:58:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:58:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 11:59:00 compute-0 nova_compute[185173]: 2026-01-23 11:59:00.460 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:01 compute-0 openstack_network_exporter[204160]: ERROR   11:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:59:01 compute-0 openstack_network_exporter[204160]: ERROR   11:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.455 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.455 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.455 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
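The long run of "Registering pollster ..." lines is the polling manager handing each pollster to a ThreadPoolExecutor; with one worker (the "[1] threads" line above) they queue and run sequentially, which is why the manager warned the cycle may take longer. A minimal sketch of that dispatch pattern, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        # each pollster: discover resources, sample them, publish
        return f"{meter}: polled"

    meters = ["network.outgoing.bytes.delta", "disk.device.usage"]
    # max_workers=1 mirrors "Processing pollsters ... with [1] threads":
    # more pollsters than workers means they execute one after another.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, meters):
            print(result)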
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.463 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.466 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'name': 'vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.470 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e', 'name': 'vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.472 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'name': 'vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
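All four discovered instances use the m1.small flavor (1 vcpu, 512 MB RAM, 1 GB root + 1 GB ephemeral disk), which lines up with the resource view reported at 11:58:19. A quick arithmetic check; the 512 MB gap in RAM is the reserved host memory from the inventory:

    instances = 4
    vcpus, ram_mb, disk_gb = 1, 512, 1 + 1   # root + ephemeral, per the flavor

    print(instances * vcpus)    # 4    -> used_vcpus=4
    print(instances * ram_mb)   # 2048 -> used_ram=2560MB (2048 + 512 reserved)
    print(instances * disk_gb)  # 8    -> used_disk=8GB, DISK_GB: 2 per instance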
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.473 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T11:59:01.473650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.479 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.485 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.491 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.496 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes.delta volume: 1143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
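network.outgoing.bytes.delta is a delta-type meter: each sample is the current cumulative interface counter minus the value seen on the previous poll. A minimal sketch of that computation under the usual delta semantics (the cumulative figures below are made up; only the 1143 delta comes from the log):

    previous = {}

    def delta_sample(instance_id, cumulative_bytes):
        prev = previous.get(instance_id)
        previous[instance_id] = cumulative_bytes
        # no baseline on the first poll, so no sample is produced
        return None if prev is None else cumulative_bytes - prev

    delta_sample("e9de5be9", 10_000)          # None (establishes baseline)
    print(delta_sample("e9de5be9", 11_143))   # 1143, as logged above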
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T11:59:01.498190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.525 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.525 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.526 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.553 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.554 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.554 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.584 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.584 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.585 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.604 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.605 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.605 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.605 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.606 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.606 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T11:59:01.606707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.664 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.664 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.664 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.730 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.731 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.731 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 anacron[29940]: Job `cron.weekly' started
Jan 23 11:59:01 compute-0 nova_compute[185173]: 2026-01-23 11:59:01.741 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:01 compute-0 anacron[29940]: Job `cron.weekly' terminated
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.792 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.793 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.793 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.848 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 41861120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.849 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.849 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.850 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.851 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.851 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.851 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 804330418 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.851 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 8862519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.851 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.852 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 1850558272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.852 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 8667328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.852 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.852 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 600800165 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.853 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 7490744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.853 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.854 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.855 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.855 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T11:59:01.850789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T11:59:01.854443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T11:59:01.856083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.856 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.857 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.857 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.857 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.857 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.857 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.858 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.858 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T11:59:01.859200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.859 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T11:59:01.860801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.861 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.862 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T11:59:01.861890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.862 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.862 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.862 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 363540160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.862 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 61167194 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.863 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.latency volume: 48392812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.863 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 374273377 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.863 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 71332104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.863 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 53834488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.863 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 327509499 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.864 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 57556257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.864 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 50069079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T11:59:01.865109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.865 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.866 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.866 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.866 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.866 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.866 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.867 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.867 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.867 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.869 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T11:59:01.869444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.870 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.871 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.871 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.871 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T11:59:01.870986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.871 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.872 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.872 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T11:59:01.873038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.893 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.913 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/memory.usage volume: 49.03125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.932 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.953 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.954 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.955 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.955 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.955 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.956 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T11:59:01.954985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.957 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.958 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.958 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.958 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T11:59:01.957417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.960 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T11:59:01.960916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.960 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.961 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 40030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.961 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/cpu volume: 403380000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.962 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/cpu volume: 34630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.962 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/cpu volume: 30770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
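The cpu volumes above are cumulative guest CPU time in nanoseconds, so a utilisation rate has to be derived from two successive polls. A minimal sketch of that arithmetic (the function name, the 30 s interval and the earlier reading are illustrative, not taken from this log):

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        """Share of one polling interval spent on CPU, normalised per vCPU."""
        return 100.0 * (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    # e.g. instance 55846fbf... if a hypothetical previous poll 30 s earlier
    # read 40_000_000_000 ns, this cycle's 40_030_000_000 ns would give:
    print(cpu_util_percent(40_000_000_000, 40_030_000_000, 30))   # 0.1 (%)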
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.964 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.964 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.964 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.965 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T11:59:01.963937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.966 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.967 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.967 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T11:59:01.966439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.968 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.968 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.969 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.969 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.969 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.969 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.970 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.970 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.970 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.971 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.971 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.971 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.972 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.972 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
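Each instance emits three disk.device.read.bytes samples because its domain exposes three block devices; the counters come from libvirt's cumulative block statistics. A minimal sketch, assuming libvirt-python and illustrative device names:

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open('qemu:///system')          # assumption: local libvirt socket
    for dom in conn.listAllDomains():
        tree = ElementTree.fromstring(dom.XMLDesc())
        for target in tree.findall('./devices/disk/target'):
            dev = target.get('dev')                # e.g. vda, vdb (illustrative)
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            print(dom.UUIDString(), dev, 'disk.device.read.bytes volume:', rd_bytes)
    conn.close()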
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.973 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T11:59:01.968648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.973 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.974 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.974 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.packets volume: 69 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.974 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.974 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T11:59:01.973926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.976 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.976 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.976 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.977 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.977 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T11:59:01.976532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.978 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes.delta volume: 98 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
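Unlike the cumulative network.incoming.bytes meter, the .delta variant reports only the growth since the previous poll: instance e9de5be9 shows a cumulative 1486 bytes this cycle and a delta of 98, implying a previous reading of 1388. A minimal sketch of that bookkeeping (the cache dict and shortened resource id are illustrative, not ceilometer's internals):

    _previous = {}

    def bytes_delta(resource_id, cumulative):
        """Bytes since the last poll; 0 on the first poll or after a reset."""
        last = _previous.get(resource_id)
        _previous[resource_id] = cumulative
        if last is None or cumulative < last:      # first sample or counter reset
            return 0
        return cumulative - last

    print(bytes_delta('e9de5be9', 1388))           # earlier poll -> 0 (no baseline)
    print(bytes_delta('e9de5be9', 1486))           # this poll    -> 98, as logged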
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.978 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.979 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.979 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.979 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.979 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T11:59:01.978989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.980 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.980 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.981 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.981 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.981 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.981 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.982 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.982 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.983 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.984 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/network.outgoing.bytes volume: 7746 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.984 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.984 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T11:59:01.983792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.986 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.986 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.986 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T11:59:01.986409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.987 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.987 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.988 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.988 14 DEBUG ceilometer.compute.pollsters [-] 84b3f69a-6ab7-406d-939b-a485518755a5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.988 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.989 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.989 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.989 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.989 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.990 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 11:59:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 11:59:01.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
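The burst of "Finished processing pollster [...]" lines closes the polling task, one line per configured meter. A minimal sketch for auditing a cycle from this journal (regexes written against the exact message formats above), flagging any pollster that started but never finished:

    import re
    import sys

    START = re.compile(r'INFO ceilometer\.polling\.manager \[-\] Polling pollster (\S+)')
    DONE = re.compile(r'Finished processing pollster \[([^\]]+)\]')

    started, finished = set(), set()
    for line in sys.stdin:
        if (m := START.search(line)):
            started.add(m.group(1))
        elif (m := DONE.search(line)):
            finished.add(m.group(1))

    print('unfinished pollsters:', started - finished or 'none')

Fed with, for example, journalctl output piped to stdin, this prints 'none' for the cycle above.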
Jan 23 11:59:03 compute-0 podman[242962]: 2026-01-23 11:59:03.758656511 +0000 UTC m=+0.078650078 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 23 11:59:03 compute-0 podman[242960]: 2026-01-23 11:59:03.76959627 +0000 UTC m=+0.089793172 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:59:03 compute-0 podman[242961]: 2026-01-23 11:59:03.772903832 +0000 UTC m=+0.088760869 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
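The podman health_status=healthy lines come from each container's configured healthcheck (the 'test': '/openstack/healthcheck' entries in config_data). A minimal sketch for re-running those checks on demand, assuming enough privilege to reach these containers:

    import subprocess

    for name in ('ovn_metadata_agent', 'podman_exporter', 'ceilometer_agent_compute'):
        # `podman healthcheck run` executes the container's own healthcheck
        # test and exits 0 when the container reports healthy.
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')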
Jan 23 11:59:05 compute-0 nova_compute[185173]: 2026-01-23 11:59:05.464 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:06 compute-0 nova_compute[185173]: 2026-01-23 11:59:06.743 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:08 compute-0 podman[243020]: 2026-01-23 11:59:08.816936877 +0000 UTC m=+0.132719298 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 11:59:10 compute-0 nova_compute[185173]: 2026-01-23 11:59:10.466 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:11 compute-0 nova_compute[185173]: 2026-01-23 11:59:11.746 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:13 compute-0 nova_compute[185173]: 2026-01-23 11:59:13.257 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
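ComputeManager._instance_usage_audit is one of many methods driven by oslo.service's periodic task machinery, which run_periodic_tasks dispatches on its own schedule. A structural sketch of that pattern (class name and spacing are illustrative; nova's audit body is elided):

    from oslo_service import periodic_task

    class AuditManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)   # illustrative interval
        def _instance_usage_audit(self, context):
            # nova's real task additionally checks whether usage auditing is
            # enabled before emitting per-instance usage notifications
            pass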
Jan 23 11:59:14 compute-0 podman[243046]: 2026-01-23 11:59:14.74877889 +0000 UTC m=+0.072572686 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 11:59:15 compute-0 nova_compute[185173]: 2026-01-23 11:59:15.472 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:16 compute-0 nova_compute[185173]: 2026-01-23 11:59:16.747 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:16 compute-0 podman[243067]: 2026-01-23 11:59:16.766783833 +0000 UTC m=+0.089919789 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, distribution-scope=public, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, version=9.4, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30)
Jan 23 11:59:18 compute-0 nova_compute[185173]: 2026-01-23 11:59:18.237 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.231 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.294 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.295 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.296 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.297 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.435 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.522 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.523 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.601 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.604 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.668 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.670 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 podman[243096]: 2026-01-23 11:59:19.728240109 +0000 UTC m=+0.058188583 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.734 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.743 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.802 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.803 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.865 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.867 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.928 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.929 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.990 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:19 compute-0 nova_compute[185173]: 2026-01-23 11:59:19.996 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.056 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.057 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.116 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.117 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.197 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.198 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.265 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.271 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.331 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.332 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.394 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.395 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.453 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.454 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.476 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.511 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.873 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.874 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4498MB free_disk=72.3554458618164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.875 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.875 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.968 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.969 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 84b3f69a-6ab7-406d-939b-a485518755a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.969 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.969 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.969 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 11:59:20 compute-0 nova_compute[185173]: 2026-01-23 11:59:20.970 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 11:59:21 compute-0 nova_compute[185173]: 2026-01-23 11:59:21.251 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 11:59:21 compute-0 nova_compute[185173]: 2026-01-23 11:59:21.264 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 11:59:21 compute-0 nova_compute[185173]: 2026-01-23 11:59:21.266 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 11:59:21 compute-0 nova_compute[185173]: 2026-01-23 11:59:21.266 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:59:21 compute-0 nova_compute[185173]: 2026-01-23 11:59:21.749 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:22 compute-0 nova_compute[185173]: 2026-01-23 11:59:22.267 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:22 compute-0 nova_compute[185173]: 2026-01-23 11:59:22.267 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 11:59:23 compute-0 nova_compute[185173]: 2026-01-23 11:59:23.163 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 11:59:23 compute-0 nova_compute[185173]: 2026-01-23 11:59:23.164 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 11:59:23 compute-0 nova_compute[185173]: 2026-01-23 11:59:23.164 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.209 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.441 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.441 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.441 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.442 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.442 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.443 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.443 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 11:59:25 compute-0 nova_compute[185173]: 2026-01-23 11:59:25.481 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:26 compute-0 podman[243161]: 2026-01-23 11:59:26.739667564 +0000 UTC m=+0.069961942 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Jan 23 11:59:26 compute-0 nova_compute[185173]: 2026-01-23 11:59:26.751 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:59:29.106 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 11:59:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:59:29.106 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 11:59:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 11:59:29.107 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 11:59:29 compute-0 podman[201022]: time="2026-01-23T11:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:59:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:59:29 compute-0 podman[201022]: @ - - [23/Jan/2026:11:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 11:59:30 compute-0 nova_compute[185173]: 2026-01-23 11:59:30.484 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:31 compute-0 openstack_network_exporter[204160]: ERROR   11:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 11:59:31 compute-0 openstack_network_exporter[204160]: ERROR   11:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 11:59:31 compute-0 nova_compute[185173]: 2026-01-23 11:59:31.754 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:34 compute-0 podman[243182]: 2026-01-23 11:59:34.764522758 +0000 UTC m=+0.099258938 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 11:59:34 compute-0 podman[243183]: 2026-01-23 11:59:34.767402649 +0000 UTC m=+0.096801968 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 23 11:59:34 compute-0 podman[243184]: 2026-01-23 11:59:34.769415048 +0000 UTC m=+0.085663705 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 11:59:35 compute-0 nova_compute[185173]: 2026-01-23 11:59:35.487 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:36 compute-0 nova_compute[185173]: 2026-01-23 11:59:36.756 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:39 compute-0 podman[243240]: 2026-01-23 11:59:39.769232733 +0000 UTC m=+0.101509133 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 11:59:40 compute-0 nova_compute[185173]: 2026-01-23 11:59:40.489 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:41 compute-0 nova_compute[185173]: 2026-01-23 11:59:41.757 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:45 compute-0 nova_compute[185173]: 2026-01-23 11:59:45.491 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:45 compute-0 podman[243264]: 2026-01-23 11:59:45.617643502 +0000 UTC m=+0.089765515 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 11:59:46 compute-0 nova_compute[185173]: 2026-01-23 11:59:46.761 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:47 compute-0 podman[243283]: 2026-01-23 11:59:47.741067608 +0000 UTC m=+0.070968216 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, container_name=kepler, release-0.7.12=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, name=ubi9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.29.0)
Jan 23 11:59:50 compute-0 nova_compute[185173]: 2026-01-23 11:59:50.494 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:50 compute-0 podman[243303]: 2026-01-23 11:59:50.726704865 +0000 UTC m=+0.057650530 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 11:59:51 compute-0 nova_compute[185173]: 2026-01-23 11:59:51.765 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:55 compute-0 nova_compute[185173]: 2026-01-23 11:59:55.498 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:56 compute-0 nova_compute[185173]: 2026-01-23 11:59:56.767 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 11:59:57 compute-0 podman[243326]: 2026-01-23 11:59:57.784548754 +0000 UTC m=+0.092542624 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter)
Jan 23 11:59:59 compute-0 podman[201022]: time="2026-01-23T11:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 11:59:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 11:59:59 compute-0 podman[201022]: @ - - [23/Jan/2026:11:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 12:00:00 compute-0 nova_compute[185173]: 2026-01-23 12:00:00.503 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:01 compute-0 openstack_network_exporter[204160]: ERROR   12:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:00:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:00:01 compute-0 openstack_network_exporter[204160]: ERROR   12:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:00:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:00:01 compute-0 nova_compute[185173]: 2026-01-23 12:00:01.769 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:05 compute-0 nova_compute[185173]: 2026-01-23 12:00:05.508 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:05 compute-0 podman[243346]: 2026-01-23 12:00:05.741575477 +0000 UTC m=+0.073362215 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:00:05 compute-0 podman[243348]: 2026-01-23 12:00:05.754168825 +0000 UTC m=+0.073397596 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 12:00:05 compute-0 podman[243347]: 2026-01-23 12:00:05.759818003 +0000 UTC m=+0.081207257 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 12:00:06 compute-0 nova_compute[185173]: 2026-01-23 12:00:06.770 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.511 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:10 compute-0 podman[243403]: 2026-01-23 12:00:10.802438079 +0000 UTC m=+0.135696189 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.870 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.872 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.873 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.874 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.874 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.882 185177 INFO nova.compute.manager [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Terminating instance
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.886 185177 DEBUG nova.compute.manager [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 12:00:10 compute-0 kernel: tap05dcc60f-5c (unregistering): left promiscuous mode
Jan 23 12:00:10 compute-0 NetworkManager[56133]: <info>  [1769169610.9361] device (tap05dcc60f-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.948 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:10 compute-0 ovn_controller[97581]: 2026-01-23T12:00:10Z|00050|binding|INFO|Releasing lport 05dcc60f-5c09-47f3-9834-3594bf71b68e from this chassis (sb_readonly=0)
Jan 23 12:00:10 compute-0 ovn_controller[97581]: 2026-01-23T12:00:10Z|00051|binding|INFO|Setting lport 05dcc60f-5c09-47f3-9834-3594bf71b68e down in Southbound
Jan 23 12:00:10 compute-0 ovn_controller[97581]: 2026-01-23T12:00:10Z|00052|binding|INFO|Removing iface tap05dcc60f-5c ovn-installed in OVS
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.959 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:10 compute-0 nova_compute[185173]: 2026-01-23 12:00:10.970 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:10 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:10.971 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:4f:a6 192.168.0.62'], port_security=['fa:16:3e:40:4f:a6 192.168.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wvvtbi4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-port-pqbiurkrbamj', 'neutron:cidrs': '192.168.0.62/24', 'neutron:device_id': '84b3f69a-6ab7-406d-939b-a485518755a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wvvtbi4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-port-pqbiurkrbamj', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=05dcc60f-5c09-47f3-9834-3594bf71b68e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:00:10 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:10.976 106832 INFO neutron.agent.ovn.metadata.agent [-] Port 05dcc60f-5c09-47f3-9834-3594bf71b68e in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 unbound from our chassis
Jan 23 12:00:10 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:10.978 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 12:00:10 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 23 12:00:10 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 53.673s CPU time.
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.001 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[c7105dbd-7f2e-4587-a744-fd989facc339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:00:11 compute-0 systemd-machined[156550]: Machine qemu-2-instance-00000002 terminated.
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.042 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ac4026-a378-49f3-963b-e1fbf2513dd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.046 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9500c9-8d8a-420a-a84b-40a701fbe781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.083 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[fde5e1c9-e3de-4c12-894f-ca3787ce6bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.115 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.120 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[919f1ba7-590c-42da-ad6b-907e860d8187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 37086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243440, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.123 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.156 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a34fad-591b-4c4b-b899-dd9886c30cd9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374787, 'tstamp': 374787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243451, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374789, 'tstamp': 374789}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243451, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.158 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.160 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.167 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.167 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.167 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.168 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.168 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.194 185177 INFO nova.virt.libvirt.driver [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Instance destroyed successfully.
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.195 185177 DEBUG nova.objects.instance [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'resources' on Instance uuid 84b3f69a-6ab7-406d-939b-a485518755a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.210 185177 DEBUG nova.virt.libvirt.vif [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:48:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-vr2au76lt4jq-fptc6vwdy3ol-vnf-bciscawcuiyk',id=2,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:48:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-xw0cqszz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:48:18Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDYyNDExMzc4MDM1Mzc4NTI4MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 23 12:00:11 compute-0 nova_compute[185173]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDYyNDExMzc4MDM1Mzc4NTI4MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ2MjQxMTM3ODAzNTM3ODUyODE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NjI0MTEzNzgwMzUzNzg1MjgxPT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=84b3f69a-6ab7-406d-939b-a485518755a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.211 185177 DEBUG nova.network.os_vif_util [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.211 185177 DEBUG nova.network.os_vif_util [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.212 185177 DEBUG os_vif [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.214 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.215 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05dcc60f-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.216 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.220 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.226 185177 INFO os_vif [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:4f:a6,bridge_name='br-int',has_traffic_filtering=True,id=05dcc60f-5c09-47f3-9834-3594bf71b68e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05dcc60f-5c')
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.228 185177 INFO nova.virt.libvirt.driver [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Deleting instance files /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5_del
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.230 185177 INFO nova.virt.libvirt.driver [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Deletion of /var/lib/nova/instances/84b3f69a-6ab7-406d-939b-a485518755a5_del complete
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.316 185177 DEBUG nova.virt.libvirt.host [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.317 185177 INFO nova.virt.libvirt.host [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] UEFI support detected
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.320 185177 INFO nova.compute.manager [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Took 0.43 seconds to destroy the instance on the hypervisor.
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.321 185177 DEBUG oslo.service.loopingcall [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.322 185177 DEBUG nova.compute.manager [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.323 185177 DEBUG nova.network.neutron [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 12:00:11 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 12:00:11.210 185177 DEBUG nova.virt.libvirt.vif [None req-d566cff6-a2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.505 185177 DEBUG nova.compute.manager [req-07f8a82d-3af0-49d2-ae9f-d0046b3d03ce req-b4b1c92c-f633-45b3-9d33-42945ec3055b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-vif-unplugged-05dcc60f-5c09-47f3-9834-3594bf71b68e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.505 185177 DEBUG oslo_concurrency.lockutils [req-07f8a82d-3af0-49d2-ae9f-d0046b3d03ce req-b4b1c92c-f633-45b3-9d33-42945ec3055b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.506 185177 DEBUG oslo_concurrency.lockutils [req-07f8a82d-3af0-49d2-ae9f-d0046b3d03ce req-b4b1c92c-f633-45b3-9d33-42945ec3055b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.506 185177 DEBUG oslo_concurrency.lockutils [req-07f8a82d-3af0-49d2-ae9f-d0046b3d03ce req-b4b1c92c-f633-45b3-9d33-42945ec3055b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.506 185177 DEBUG nova.compute.manager [req-07f8a82d-3af0-49d2-ae9f-d0046b3d03ce req-b4b1c92c-f633-45b3-9d33-42945ec3055b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] No waiting events found dispatching network-vif-unplugged-05dcc60f-5c09-47f3-9834-3594bf71b68e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.506 185177 DEBUG nova.compute.manager [req-07f8a82d-3af0-49d2-ae9f-d0046b3d03ce req-b4b1c92c-f633-45b3-9d33-42945ec3055b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-vif-unplugged-05dcc60f-5c09-47f3-9834-3594bf71b68e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.586 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:00:11 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:11.587 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.587 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:11 compute-0 nova_compute[185173]: 2026-01-23 12:00:11.773 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:12 compute-0 nova_compute[185173]: 2026-01-23 12:00:12.372 185177 DEBUG nova.compute.manager [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-changed-05dcc60f-5c09-47f3-9834-3594bf71b68e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:00:12 compute-0 nova_compute[185173]: 2026-01-23 12:00:12.372 185177 DEBUG nova.compute.manager [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Refreshing instance network info cache due to event network-changed-05dcc60f-5c09-47f3-9834-3594bf71b68e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:00:12 compute-0 nova_compute[185173]: 2026-01-23 12:00:12.373 185177 DEBUG oslo_concurrency.lockutils [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:00:12 compute-0 nova_compute[185173]: 2026-01-23 12:00:12.373 185177 DEBUG oslo_concurrency.lockutils [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:00:12 compute-0 nova_compute[185173]: 2026-01-23 12:00:12.373 185177 DEBUG nova.network.neutron [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Refreshing network info cache for port 05dcc60f-5c09-47f3-9834-3594bf71b68e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.606 185177 DEBUG nova.network.neutron [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.610 185177 DEBUG nova.compute.manager [req-181d5f79-7ea7-4df4-adeb-b2131b837ffa req-33d58e32-4e78-4c7b-8ffc-7600365d2f4d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.610 185177 DEBUG oslo_concurrency.lockutils [req-181d5f79-7ea7-4df4-adeb-b2131b837ffa req-33d58e32-4e78-4c7b-8ffc-7600365d2f4d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.610 185177 DEBUG oslo_concurrency.lockutils [req-181d5f79-7ea7-4df4-adeb-b2131b837ffa req-33d58e32-4e78-4c7b-8ffc-7600365d2f4d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.610 185177 DEBUG oslo_concurrency.lockutils [req-181d5f79-7ea7-4df4-adeb-b2131b837ffa req-33d58e32-4e78-4c7b-8ffc-7600365d2f4d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.611 185177 DEBUG nova.compute.manager [req-181d5f79-7ea7-4df4-adeb-b2131b837ffa req-33d58e32-4e78-4c7b-8ffc-7600365d2f4d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] No waiting events found dispatching network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.611 185177 WARNING nova.compute.manager [req-181d5f79-7ea7-4df4-adeb-b2131b837ffa req-33d58e32-4e78-4c7b-8ffc-7600365d2f4d e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Received unexpected event network-vif-plugged-05dcc60f-5c09-47f3-9834-3594bf71b68e for instance with vm_state active and task_state deleting.
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.792 185177 INFO nova.compute.manager [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Took 2.47 seconds to deallocate network for instance.
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.951 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:13 compute-0 nova_compute[185173]: 2026-01-23 12:00:13.951 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.069 185177 DEBUG nova.compute.provider_tree [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.158 185177 DEBUG nova.scheduler.client.report [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.188 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.225 185177 DEBUG nova.network.neutron [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updated VIF entry in instance network info cache for port 05dcc60f-5c09-47f3-9834-3594bf71b68e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.226 185177 DEBUG nova.network.neutron [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Updating instance_info_cache with network_info: [{"id": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "address": "fa:16:3e:40:4f:a6", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcc60f-5c", "ovs_interfaceid": "05dcc60f-5c09-47f3-9834-3594bf71b68e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.237 185177 INFO nova.scheduler.client.report [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Deleted allocations for instance 84b3f69a-6ab7-406d-939b-a485518755a5
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.472 185177 DEBUG oslo_concurrency.lockutils [req-a616e0f1-b96d-4b9f-8f1e-b2d2b5fa438d req-e6ff9dd1-2265-41c0-bfc9-f71ae49f0cf9 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-84b3f69a-6ab7-406d-939b-a485518755a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:00:14 compute-0 nova_compute[185173]: 2026-01-23 12:00:14.604 185177 DEBUG oslo_concurrency.lockutils [None req-d566cff6-a2c6-42da-9bc8-e0d72d966337 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "84b3f69a-6ab7-406d-939b-a485518755a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:15 compute-0 podman[243463]: 2026-01-23 12:00:15.772445904 +0000 UTC m=+0.091711043 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 12:00:16 compute-0 nova_compute[185173]: 2026-01-23 12:00:16.216 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:16 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:16.590 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:00:16 compute-0 nova_compute[185173]: 2026-01-23 12:00:16.775 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:18 compute-0 podman[243480]: 2026-01-23 12:00:18.770795402 +0000 UTC m=+0.099610716 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:00:20 compute-0 nova_compute[185173]: 2026-01-23 12:00:20.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.218 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.229 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.234 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.261 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.262 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.263 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.263 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.358 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 podman[243502]: 2026-01-23 12:00:21.369301775 +0000 UTC m=+0.054467543 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.421 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.422 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.478 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.484 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.543 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.544 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.598 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.604 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.661 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.662 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.720 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.721 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.776 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.786 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.786 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.841 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.847 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.904 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.905 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.985 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:21 compute-0 nova_compute[185173]: 2026-01-23 12:00:21.986 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.044 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.045 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.099 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.428 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.429 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4741MB free_disk=72.37800979614258GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.430 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.430 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.679 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.679 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.679 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.679 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.679 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.764 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.783 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.815 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:00:22 compute-0 nova_compute[185173]: 2026-01-23 12:00:22.815 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:23 compute-0 nova_compute[185173]: 2026-01-23 12:00:23.812 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:23 compute-0 nova_compute[185173]: 2026-01-23 12:00:23.830 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:23 compute-0 nova_compute[185173]: 2026-01-23 12:00:23.831 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:00:24 compute-0 nova_compute[185173]: 2026-01-23 12:00:24.038 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:00:24 compute-0 nova_compute[185173]: 2026-01-23 12:00:24.038 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:00:24 compute-0 nova_compute[185173]: 2026-01-23 12:00:24.038 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:00:26 compute-0 nova_compute[185173]: 2026-01-23 12:00:26.190 185177 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769169611.1888938, 84b3f69a-6ab7-406d-939b-a485518755a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:00:26 compute-0 nova_compute[185173]: 2026-01-23 12:00:26.191 185177 INFO nova.compute.manager [-] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] VM Stopped (Lifecycle Event)
Jan 23 12:00:26 compute-0 nova_compute[185173]: 2026-01-23 12:00:26.210 185177 DEBUG nova.compute.manager [None req-eab44184-762f-467c-8ad9-780488ebce69 - - - - - -] [instance: 84b3f69a-6ab7-406d-939b-a485518755a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:00:26 compute-0 nova_compute[185173]: 2026-01-23 12:00:26.220 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:26 compute-0 nova_compute[185173]: 2026-01-23 12:00:26.778 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:28 compute-0 podman[243563]: 2026-01-23 12:00:28.754066698 +0000 UTC m=+0.073621771 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=)
Jan 23 12:00:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:29.107 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:00:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:29.108 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:00:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:00:29.109 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:00:29 compute-0 nova_compute[185173]: 2026-01-23 12:00:29.216 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updating instance_info_cache with network_info: [{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:00:29 compute-0 nova_compute[185173]: 2026-01-23 12:00:29.229 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:00:29 compute-0 nova_compute[185173]: 2026-01-23 12:00:29.230 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:00:29 compute-0 nova_compute[185173]: 2026-01-23 12:00:29.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:29 compute-0 nova_compute[185173]: 2026-01-23 12:00:29.231 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:29 compute-0 nova_compute[185173]: 2026-01-23 12:00:29.231 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:00:29 compute-0 podman[201022]: time="2026-01-23T12:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:00:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:00:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 23 12:00:31 compute-0 nova_compute[185173]: 2026-01-23 12:00:31.222 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:31 compute-0 openstack_network_exporter[204160]: ERROR   12:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:00:31 compute-0 openstack_network_exporter[204160]: ERROR   12:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:00:31 compute-0 nova_compute[185173]: 2026-01-23 12:00:31.781 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:36 compute-0 nova_compute[185173]: 2026-01-23 12:00:36.225 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:36 compute-0 podman[243583]: 2026-01-23 12:00:36.740768296 +0000 UTC m=+0.059095715 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 12:00:36 compute-0 podman[243584]: 2026-01-23 12:00:36.769720284 +0000 UTC m=+0.084166079 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=ceilometer_agent_compute)
Jan 23 12:00:37 compute-0 podman[243585]: 2026-01-23 12:00:37.681082297 +0000 UTC m=+0.989454403 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 12:00:37 compute-0 nova_compute[185173]: 2026-01-23 12:00:37.682 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:41 compute-0 nova_compute[185173]: 2026-01-23 12:00:41.228 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:41 compute-0 podman[243643]: 2026-01-23 12:00:41.765160989 +0000 UTC m=+0.102331323 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 12:00:42 compute-0 nova_compute[185173]: 2026-01-23 12:00:42.685 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:45 compute-0 ovn_controller[97581]: 2026-01-23T12:00:45Z|00053|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 23 12:00:46 compute-0 nova_compute[185173]: 2026-01-23 12:00:46.231 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:46 compute-0 podman[243669]: 2026-01-23 12:00:46.776447139 +0000 UTC m=+0.105506470 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 12:00:47 compute-0 nova_compute[185173]: 2026-01-23 12:00:47.688 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:49 compute-0 podman[243690]: 2026-01-23 12:00:49.741394368 +0000 UTC m=+0.074602164 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.29.0, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, container_name=kepler, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:00:51 compute-0 nova_compute[185173]: 2026-01-23 12:00:51.234 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:51 compute-0 podman[243710]: 2026-01-23 12:00:51.734600682 +0000 UTC m=+0.065212325 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 12:00:52 compute-0 sshd-session[243723]: Invalid user sol from 45.148.10.240 port 38412
Jan 23 12:00:52 compute-0 sshd-session[243723]: Connection closed by invalid user sol 45.148.10.240 port 38412 [preauth]
Jan 23 12:00:52 compute-0 nova_compute[185173]: 2026-01-23 12:00:52.690 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:56 compute-0 nova_compute[185173]: 2026-01-23 12:00:56.236 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:57 compute-0 nova_compute[185173]: 2026-01-23 12:00:57.694 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:00:59 compute-0 podman[243734]: 2026-01-23 12:00:59.73451725 +0000 UTC m=+0.063892324 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.6, container_name=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64)
Jan 23 12:00:59 compute-0 podman[201022]: time="2026-01-23T12:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:00:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:00:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 12:01:01 compute-0 nova_compute[185173]: 2026-01-23 12:01:01.238 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:01 compute-0 openstack_network_exporter[204160]: ERROR   12:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:01:01 compute-0 openstack_network_exporter[204160]: ERROR   12:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.455 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.456 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.456 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2843372f60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.463 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.466 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e', 'name': 'vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.469 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'name': 'vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.469 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T12:01:01.469947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.475 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.479 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.482 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.483 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.484 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T12:01:01.483993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.506 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.506 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.507 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.534 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.534 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.534 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.558 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.559 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.559 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.560 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T12:01:01.560880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.617 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.617 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.617 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.674 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.675 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.675 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.731 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 41861120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.731 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.732 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.732 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T12:01:01.733363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.733 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.734 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.734 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 1850558272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.734 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 8667328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.735 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.735 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 600800165 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.735 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 7490744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.735 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T12:01:01.736853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.737 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.737 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.737 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.738 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T12:01:01.738763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.739 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.739 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.739 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.740 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.740 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.740 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.740 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.741 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
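
Per-device meters such as disk.device.write.requests emit one sample per attached block device, which is why each instance above produces three volume lines (for example 234, 1 and 0). On a libvirt-backed compute node those counters can be read per device; a hedged sketch using libvirt-python, with the device names assumed:

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    for dev in ("vda", "vdb", "vdc"):  # assumed device list for this instance
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "write requests:", wr_req)  # cf. disk.device.write.requests
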
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.742 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.742 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T12:01:01.742176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.742 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.743 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T12:01:01.744059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
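
Note the interleaving here: the "Updated heartbeat" lines carry id 12 while the polling loop logs as 14, so a heartbeat (disk.ephemeral.size, written at 12:01:01.744) can appear after the next pollster has already started. Assuming these are two cooperating threads of the agent (an interpretation, not confirmed by the log), a minimal producer/consumer sketch of that split, with all names illustrative:

    import queue
    import threading

    updates = queue.Queue()

    def status_writer():  # plays the role of id 12 in the log
        while True:
            item = updates.get()
            if item is None:
                break
            meter, ts = item
            print(f"Updated heartbeat for {meter} ({ts})")

    t = threading.Thread(target=status_writer, daemon=True)
    t.start()
    updates.put(("disk.ephemeral.size", "2026-01-23T12:01:01.744059"))
    updates.put(None)  # shut the writer down
    t.join()
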
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.745 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.746 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.746 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 374273377 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.746 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 71332104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T12:01:01.745555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.746 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.latency volume: 53834488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.747 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 327509499 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.747 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 57556257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.747 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 50069079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.747 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
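
The disk.device.read.latency volumes (hundreds of millions per device) are consistent with cumulative nanoseconds spent servicing reads, which libvirt exposes through the flags variant of the block-stats call when the hypervisor reports it. A hedged sketch, device name assumed:

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    stats = dom.blockStatsFlags("vda")      # assumed device name
    print(stats.get("rd_total_times"))      # ns spent on reads, if reported
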
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.748 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T12:01:01.748397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.749 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.749 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.749 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.749 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.749 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.750 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.750 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.751 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.751 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.752 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.753 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T12:01:01.751644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T12:01:01.752660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.753 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.754 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.754 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T12:01:01.754140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.772 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 CROND[243756]: (root) CMD (run-parts /etc/cron.hourly)
Jan 23 12:01:01 compute-0 run-parts[243759]: (/etc/cron.hourly) starting 0anacron
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.802 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 run-parts[243765]: (/etc/cron.hourly) finished 0anacron
Jan 23 12:01:01 compute-0 CROND[243755]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.821 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
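
The fractional memory.usage figures (48.76171875, 48.953125, 49.01171875) are exactly KiB counts divided by 1024, i.e. megabytes derived from libvirt's per-domain memory statistics. A hedged reconstruction, not the agent's exact code; the source field depends on what the guest reports, and RSS is used here only as an illustrative fallback:

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
    stats = dom.memoryStats()                    # all values are in KiB
    used_kib = stats.get("rss", 0)               # assumed fallback field
    print("memory.usage MB:", used_kib / 1024.0)  # 49932 KiB -> 48.76171875
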
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.822 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T12:01:01.822402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.822 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.823 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.823 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.823 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
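
power.state volume: 1 for all three instances lines up with libvirt's domain state enum, where 1 is VIR_DOMAIN_RUNNING. A small lookup sketch:

    import libvirt  # pip install libvirt-python

    STATES = {0: "no state", 1: "running", 2: "blocked", 3: "paused",
              4: "being shut down", 5: "shut off", 6: "crashed",
              7: "pm suspended"}

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e9de5be9-383e-4139-a192-9a00ac9030d0")
    state, reason = dom.state()
    print(state, STATES.get(state, "unknown"))  # expected here: 1 running
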
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.824 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.824 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T12:01:01.824487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.824 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.825 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.825 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.825 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.826 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T12:01:01.826254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.826 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.826 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 41220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/cpu volume: 35840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/cpu volume: 31980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
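
The cpu meter is cumulative guest CPU time in nanoseconds (41220000000 ns is about 41.2 s), so utilisation has to be derived downstream from two consecutive samples. A toy derivation under those assumptions:

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        """Percent CPU use between two cumulative cputime samples."""
        return 100.0 * (curr_ns - prev_ns) / (interval_s * vcpus * 1e9)

    # 0.6 s of CPU time over a 60 s interval on 1 vCPU -> 1.0 %
    print(cpu_util_percent(41_220_000_000, 41_820_000_000, 60, 1))
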
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T12:01:01.828183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.828 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.828 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T12:01:01.829885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.830 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.830 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.830 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.830 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.831 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.831 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.831 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T12:01:01.831675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.831 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.832 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.832 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.832 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.832 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.833 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.833 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.833 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.833 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.833 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T12:01:01.834920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.835 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.835 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.835 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T12:01:01.836760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.836 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.837 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.837 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.837 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
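
network.incoming.bytes is a cumulative counter (2220, 1654, 1570 earlier in this cycle) while the .delta meter reports the difference between two consecutive polls, 84 bytes here for each instance. A hedged sketch of that derivation, handling the first poll and counter resets:

    class DeltaMeter:
        def __init__(self):
            self._last = {}  # resource id -> previous cumulative reading

        def sample(self, resource_id, cumulative):
            prev = self._last.get(resource_id)
            self._last[resource_id] = cumulative
            if prev is None or cumulative < prev:  # first poll / counter reset
                return None
            return cumulative - prev

    m = DeltaMeter()
    m.sample("55846fbf", 2136)          # earlier cycle: no delta yet
    print(m.sample("55846fbf", 2220))   # this cycle -> 84
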
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T12:01:01.838497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.838 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.839 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.839 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.839 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.839 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.839 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.840 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.840 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.841 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.841 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.842 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
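The "Checking if we need coordination" / "current hashrings are the following [None]" pairs above show the partitioned-polling path being skipped: no source here is configured for coordination, so this agent polls every locally discovered instance itself. When coordination is enabled, agents in a group divide resources with a consistent hash ring; below is a minimal, hypothetical sketch of that partitioning idea (pure Python, not ceilometer's actual implementation, and the class and helper names are invented for illustration):

    import bisect
    import hashlib

    def _hash(key: str) -> int:
        # Stable hash so every agent computes an identical ring.
        return int(hashlib.md5(key.encode()).hexdigest()[:16], 16)

    class HashRing:
        """Toy consistent-hash ring mapping resource IDs to agent names."""
        def __init__(self, agents, replicas=100):
            self.ring = sorted((_hash(f"{a}-{i}"), a)
                               for a in agents for i in range(replicas))
            self.keys = [k for k, _ in self.ring]

        def agent_for(self, resource_id: str) -> str:
            idx = bisect.bisect(self.keys, _hash(resource_id)) % len(self.ring)
            return self.ring[idx][1]

    ring = HashRing(["compute-0", "compute-1"])
    instances = ["55846fbf-a87a-4cba-be0b-23125d3d9ef4",
                 "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e",
                 "e9de5be9-383e-4139-a192-9a00ac9030d0"]
    # Each agent would poll only the instances the ring assigns to it.
    mine = [i for i in instances if ring.agent_for(i) == "compute-0"]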
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.842 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T12:01:01.842703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.843 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.843 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.843 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T12:01:01.844392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.844 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.845 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.845 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.845 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.845 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.845 14 DEBUG ceilometer.compute.pollsters [-] ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.846 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.846 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.846 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.847 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.848 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:01:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:01:01.849 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
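Taken together, the manager lines above trace one polling-task iteration per meter: discovery, a coordination check, a heartbeat update, per-instance samples, then "Finished polling" and finally a "Finished processing" line for each pollster in the task. A condensed sketch of that control flow, with hypothetical method names that only mirror the logged steps:

    def run_polling_task(manager, pollsters):
        # One polling-task iteration, following the DEBUG/INFO
        # sequence in the log above (names here are illustrative).
        for p in pollsters:
            # "Executing discovery process ... via [local_instances]"
            resources = manager.discover(p.discovery_method)
            # "Checking if we need coordination ..."; with no hash ring
            # configured the agent keeps everything it discovered.
            if manager.needs_coordination(p):
                resources = manager.filter_by_hashring(p, resources)
            manager.update_heartbeat(p.name)  # "Updated heartbeat for <name>"
            for sample in p.get_samples(resources):
                manager.publish(sample)       # "<uuid>/<meter> volume: <n>"
            # "Finished polling pollster <name> in the context of pollsters"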
Jan 23 12:01:02 compute-0 nova_compute[185173]: 2026-01-23 12:01:02.697 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:06 compute-0 nova_compute[185173]: 2026-01-23 12:01:06.241 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:07 compute-0 nova_compute[185173]: 2026-01-23 12:01:07.698 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:07 compute-0 podman[243766]: 2026-01-23 12:01:07.748020856 +0000 UTC m=+0.068406594 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:01:07 compute-0 podman[243790]: 2026-01-23 12:01:07.842379703 +0000 UTC m=+0.069044029 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Jan 23 12:01:07 compute-0 podman[243791]: 2026-01-23 12:01:07.853459704 +0000 UTC m=+0.075593560 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
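The podman "container health_status ... health_status=healthy" events above are emitted when each container's configured healthcheck ('test': '/openstack/healthcheck <service>') runs on its schedule. The same check can be triggered by hand with `podman healthcheck run`; a small sketch (container name taken from the log, the wrapper function is ours):

    import subprocess

    def container_healthy(name: str) -> bool:
        # `podman healthcheck run` executes the container's configured
        # test command and exits 0 when the container reports healthy.
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        return r.returncode == 0

    print(container_healthy("ceilometer_agent_compute"))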
Jan 23 12:01:11 compute-0 nova_compute[185173]: 2026-01-23 12:01:11.244 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:12 compute-0 nova_compute[185173]: 2026-01-23 12:01:12.700 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:12 compute-0 podman[243825]: 2026-01-23 12:01:12.781849 +0000 UTC m=+0.106432613 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:01:13 compute-0 nova_compute[185173]: 2026-01-23 12:01:13.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:16 compute-0 nova_compute[185173]: 2026-01-23 12:01:16.245 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:17 compute-0 nova_compute[185173]: 2026-01-23 12:01:17.703 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:17 compute-0 podman[243850]: 2026-01-23 12:01:17.731364085 +0000 UTC m=+0.062993188 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 23 12:01:20 compute-0 podman[243871]: 2026-01-23 12:01:20.763145331 +0000 UTC m=+0.098150667 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.248 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.269 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.270 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.270 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.271 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.386 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.452 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.452 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.509 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.510 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.570 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.571 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.635 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.641 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.722 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.723 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.782 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.783 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.844 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.845 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.902 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.909 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.987 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:21 compute-0 nova_compute[185173]: 2026-01-23 12:01:21.988 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.046 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.048 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.109 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.110 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.168 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
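Each "Running cmd" / "CMD ... returned: 0" pair above is nova's disk inspection during the resource audit: qemu-img info is wrapped in oslo_concurrency.prlimit to cap address space (1 GiB) and CPU time (30 s), and --force-share lets it read an image a running guest holds open. A sketch that reruns one logged command verbatim and parses its JSON output (path copied from the log):

    import json
    import subprocess

    CMD = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info",
           "/var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk",
           "--force-share", "--output=json"]

    info = json.loads(subprocess.run(CMD, capture_output=True,
                                     text=True, check=True).stdout)
    # qemu-img reports format plus virtual and on-disk sizes in bytes.
    print(info["format"], info["virtual-size"], info.get("actual-size"))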
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.501 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.502 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4734MB free_disk=72.37800979614258GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.502 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.503 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.625 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.625 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.625 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.626 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.626 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
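The final resource view follows directly from the three placement allocations reported just above (each {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}) plus the 512 MB memory reservation: used_ram = 512 + 3 × 512 = 2048 MB, used_disk = 3 × 2 = 6 GB, and used_vcpus = 3, leaving free_vcpus = 8 - 3 = 5. As a quick check:

    instances = 3
    assert 512 + instances * 512 == 2048   # used_ram (MB): reserved + MEMORY_MB per instance
    assert instances * 2 == 6              # used_disk (GB): DISK_GB per instance
    assert 8 - instances * 1 == 5          # free_vcpus = total_vcpus - used VCPU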
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.704 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.719 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:01:22 compute-0 podman[243927]: 2026-01-23 12:01:22.738259794 +0000 UTC m=+0.060495698 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.740 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
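Placement turns that unchanged inventory into schedulable capacity as (total - reserved) × allocation_ratio per resource class, so this record allows (8 - 0) × 4.0 = 32 VCPU, (7679 - 512) × 1.0 = 7167 MB of RAM, and (79 - 1) × 0.9 = 70.2 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2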
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.742 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:01:22 compute-0 nova_compute[185173]: 2026-01-23 12:01:22.742 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:01:23 compute-0 nova_compute[185173]: 2026-01-23 12:01:23.737 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:23 compute-0 nova_compute[185173]: 2026-01-23 12:01:23.738 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:23 compute-0 nova_compute[185173]: 2026-01-23 12:01:23.738 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:01:23 compute-0 nova_compute[185173]: 2026-01-23 12:01:23.934 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:01:23 compute-0 nova_compute[185173]: 2026-01-23 12:01:23.934 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:01:23 compute-0 nova_compute[185173]: 2026-01-23 12:01:23.934 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:01:26 compute-0 nova_compute[185173]: 2026-01-23 12:01:26.253 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.319 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updating instance_info_cache with network_info: [{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.340 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.340 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.341 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.341 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.342 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.342 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.342 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:01:27 compute-0 nova_compute[185173]: 2026-01-23 12:01:27.708 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:01:29.108 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:01:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:01:29.109 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:01:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:01:29.110 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:01:29 compute-0 podman[201022]: time="2026-01-23T12:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:01:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:01:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 23 12:01:30 compute-0 podman[243952]: 2026-01-23 12:01:30.769911498 +0000 UTC m=+0.089765573 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 23 12:01:31 compute-0 nova_compute[185173]: 2026-01-23 12:01:31.256 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:31 compute-0 openstack_network_exporter[204160]: ERROR   12:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:01:31 compute-0 openstack_network_exporter[204160]: ERROR   12:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:01:32 compute-0 nova_compute[185173]: 2026-01-23 12:01:32.711 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:36 compute-0 nova_compute[185173]: 2026-01-23 12:01:36.259 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:37 compute-0 nova_compute[185173]: 2026-01-23 12:01:37.714 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:38 compute-0 podman[243973]: 2026-01-23 12:01:38.738527043 +0000 UTC m=+0.065029428 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 12:01:38 compute-0 podman[243972]: 2026-01-23 12:01:38.741457214 +0000 UTC m=+0.070883351 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 23 12:01:38 compute-0 podman[243971]: 2026-01-23 12:01:38.753115619 +0000 UTC m=+0.086523323 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 12:01:41 compute-0 nova_compute[185173]: 2026-01-23 12:01:41.262 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:42 compute-0 nova_compute[185173]: 2026-01-23 12:01:42.716 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:43 compute-0 podman[244031]: 2026-01-23 12:01:43.771409369 +0000 UTC m=+0.107905975 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 12:01:46 compute-0 nova_compute[185173]: 2026-01-23 12:01:46.264 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:47 compute-0 nova_compute[185173]: 2026-01-23 12:01:47.718 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:48 compute-0 podman[244056]: 2026-01-23 12:01:48.76463461 +0000 UTC m=+0.091045453 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 12:01:51 compute-0 nova_compute[185173]: 2026-01-23 12:01:51.266 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:51 compute-0 podman[244077]: 2026-01-23 12:01:51.729631264 +0000 UTC m=+0.065805167 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, container_name=kepler, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Jan 23 12:01:52 compute-0 nova_compute[185173]: 2026-01-23 12:01:52.719 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:53 compute-0 podman[244097]: 2026-01-23 12:01:53.72985916 +0000 UTC m=+0.064224058 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:01:56 compute-0 nova_compute[185173]: 2026-01-23 12:01:56.268 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:57 compute-0 nova_compute[185173]: 2026-01-23 12:01:57.721 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:01:59 compute-0 podman[201022]: time="2026-01-23T12:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:01:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:01:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 23 12:02:01 compute-0 nova_compute[185173]: 2026-01-23 12:02:01.271 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:01 compute-0 openstack_network_exporter[204160]: ERROR   12:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:02:01 compute-0 openstack_network_exporter[204160]: ERROR   12:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:02:01 compute-0 podman[244121]: 2026-01-23 12:02:01.732545565 +0000 UTC m=+0.065578222 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 12:02:02 compute-0 nova_compute[185173]: 2026-01-23 12:02:02.723 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:06 compute-0 nova_compute[185173]: 2026-01-23 12:02:06.273 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:07 compute-0 nova_compute[185173]: 2026-01-23 12:02:07.725 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:09 compute-0 podman[244144]: 2026-01-23 12:02:09.764913125 +0000 UTC m=+0.077474332 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260120)
Jan 23 12:02:09 compute-0 podman[244145]: 2026-01-23 12:02:09.772218134 +0000 UTC m=+0.074030818 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 12:02:09 compute-0 podman[244143]: 2026-01-23 12:02:09.775038892 +0000 UTC m=+0.097899790 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:02:11 compute-0 nova_compute[185173]: 2026-01-23 12:02:11.275 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:12 compute-0 nova_compute[185173]: 2026-01-23 12:02:12.727 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.252 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.252 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.253 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.253 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.254 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.256 185177 INFO nova.compute.manager [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Terminating instance
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.257 185177 DEBUG nova.compute.manager [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 12:02:13 compute-0 kernel: tapb9b63bb2-5f (unregistering): left promiscuous mode
Jan 23 12:02:13 compute-0 NetworkManager[56133]: <info>  [1769169733.3083] device (tapb9b63bb2-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 12:02:13 compute-0 ovn_controller[97581]: 2026-01-23T12:02:13Z|00054|binding|INFO|Releasing lport b9b63bb2-5fc6-48b1-8945-ac43ce6e954e from this chassis (sb_readonly=0)
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.316 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 ovn_controller[97581]: 2026-01-23T12:02:13Z|00055|binding|INFO|Setting lport b9b63bb2-5fc6-48b1-8945-ac43ce6e954e down in Southbound
Jan 23 12:02:13 compute-0 ovn_controller[97581]: 2026-01-23T12:02:13Z|00056|binding|INFO|Removing iface tapb9b63bb2-5f ovn-installed in OVS
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.324 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.329 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:bc:bc 192.168.0.99'], port_security=['fa:16:3e:fa:bc:bc 192.168.0.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wvvtbi4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-port-bzb33egd64ru', 'neutron:cidrs': '192.168.0.99/24', 'neutron:device_id': 'ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wvvtbi4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-port-bzb33egd64ru', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.330 106832 INFO neutron.agent.ovn.metadata.agent [-] Port b9b63bb2-5fc6-48b1-8945-ac43ce6e954e in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 unbound from our chassis
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.331 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.333 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.353 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[a2163008-0573-42b3-937d-4ed5d70b2ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:02:13 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 23 12:02:13 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 21.798s CPU time.
Jan 23 12:02:13 compute-0 systemd-machined[156550]: Machine qemu-3-instance-00000003 terminated.
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.385 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a8ac7-8d75-432b-bbe3-0cf69ec38491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.389 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5a2fed-d8ca-4981-ab23-232945e5350f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.419 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[7de5eb9c-78c5-4f5b-bab6-fafa508e2ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.436 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[caf2e50c-6394-4b8e-bc52-0dd92bb0524a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 14, 'rx_bytes': 658, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 14, 'rx_bytes': 658, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 37086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244214, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.453 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad012f4-bd21-4372-b942-b2e6e74e2d26]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374787, 'tstamp': 374787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244215, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374789, 'tstamp': 374789}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244215, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.455 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.457 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.463 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.463 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.463 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.463 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.464 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.544 185177 INFO nova.virt.libvirt.driver [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Instance destroyed successfully.
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.545 185177 DEBUG nova.objects.instance [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'resources' on Instance uuid ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.561 185177 DEBUG nova.virt.libvirt.vif [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-nwnahxa6hq2y-lqyj7kfebyqq-vnf-dcwk4osqlplv',id=3,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:54:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-l5s1i5hh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:54:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTk0MTc1ODMzMjU0NTYxMjY3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 23 12:02:13 compute-0 nova_compute[185173]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTk0MTc1ODMzMjU0NTYxMjY3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU5NDE3NTgzMzI1NDU2MTI2NzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01OTQxNzU4MzMyNTQ1NjEyNjcxPT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.561 185177 DEBUG nova.network.os_vif_util [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.562 185177 DEBUG nova.network.os_vif_util [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.562 185177 DEBUG os_vif [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.563 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.564 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b63bb2-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.565 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.567 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.570 185177 INFO os_vif [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:bc:bc,bridge_name='br-int',has_traffic_filtering=True,id=b9b63bb2-5fc6-48b1-8945-ac43ce6e954e,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb9b63bb2-5f')
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.571 185177 INFO nova.virt.libvirt.driver [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Deleting instance files /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e_del
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.572 185177 INFO nova.virt.libvirt.driver [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Deletion of /var/lib/nova/instances/ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e_del complete
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.630 185177 INFO nova.compute.manager [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.630 185177 DEBUG oslo.service.loopingcall [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.630 185177 DEBUG nova.compute.manager [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.631 185177 DEBUG nova.network.neutron [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.810 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:02:13 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:13.811 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 12:02:13 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 12:02:13.561 185177 DEBUG nova.virt.libvirt.vif [None req-a3a8e79d-c0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.813 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.915 185177 DEBUG nova.compute.manager [req-30c93600-f2ed-4d63-aa4c-c854b25d8dcc req-16f6bacc-bc86-466e-bd07-d7ee7d26e576 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-vif-unplugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.915 185177 DEBUG oslo_concurrency.lockutils [req-30c93600-f2ed-4d63-aa4c-c854b25d8dcc req-16f6bacc-bc86-466e-bd07-d7ee7d26e576 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.916 185177 DEBUG oslo_concurrency.lockutils [req-30c93600-f2ed-4d63-aa4c-c854b25d8dcc req-16f6bacc-bc86-466e-bd07-d7ee7d26e576 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.916 185177 DEBUG oslo_concurrency.lockutils [req-30c93600-f2ed-4d63-aa4c-c854b25d8dcc req-16f6bacc-bc86-466e-bd07-d7ee7d26e576 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.917 185177 DEBUG nova.compute.manager [req-30c93600-f2ed-4d63-aa4c-c854b25d8dcc req-16f6bacc-bc86-466e-bd07-d7ee7d26e576 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] No waiting events found dispatching network-vif-unplugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:02:13 compute-0 nova_compute[185173]: 2026-01-23 12:02:13.917 185177 DEBUG nova.compute.manager [req-30c93600-f2ed-4d63-aa4c-c854b25d8dcc req-16f6bacc-bc86-466e-bd07-d7ee7d26e576 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-vif-unplugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.307 185177 DEBUG nova.compute.manager [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-changed-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.307 185177 DEBUG nova.compute.manager [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Refreshing instance network info cache due to event network-changed-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.308 185177 DEBUG oslo_concurrency.lockutils [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.308 185177 DEBUG oslo_concurrency.lockutils [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.309 185177 DEBUG nova.network.neutron [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Refreshing network info cache for port b9b63bb2-5fc6-48b1-8945-ac43ce6e954e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:02:14 compute-0 podman[244238]: 2026-01-23 12:02:14.774661912 +0000 UTC m=+0.105815894 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.953 185177 DEBUG nova.network.neutron [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:02:14 compute-0 nova_compute[185173]: 2026-01-23 12:02:14.973 185177 INFO nova.compute.manager [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Took 1.34 seconds to deallocate network for instance.
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.017 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.018 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.144 185177 DEBUG nova.compute.provider_tree [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.163 185177 DEBUG nova.scheduler.client.report [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.184 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.207 185177 INFO nova.scheduler.client.report [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Deleted allocations for instance ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.272 185177 DEBUG oslo_concurrency.lockutils [None req-a3a8e79d-c025-4469-81ce-007a5458c715 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.489 185177 DEBUG nova.network.neutron [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updated VIF entry in instance network info cache for port b9b63bb2-5fc6-48b1-8945-ac43ce6e954e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.490 185177 DEBUG nova.network.neutron [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Updating instance_info_cache with network_info: [{"id": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "address": "fa:16:3e:fa:bc:bc", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b63bb2-5f", "ovs_interfaceid": "b9b63bb2-5fc6-48b1-8945-ac43ce6e954e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:02:15 compute-0 nova_compute[185173]: 2026-01-23 12:02:15.513 185177 DEBUG oslo_concurrency.lockutils [req-b727031c-536e-4918-967f-edd1d30e395e req-a0825426-da5d-41d0-8952-01e5e1f91987 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:02:16 compute-0 nova_compute[185173]: 2026-01-23 12:02:16.005 185177 DEBUG nova.compute.manager [req-33759176-b876-4f50-aca6-448a2291dd09 req-67c7d0af-5002-4824-ba56-e6e177d8bf4a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:02:16 compute-0 nova_compute[185173]: 2026-01-23 12:02:16.005 185177 DEBUG oslo_concurrency.lockutils [req-33759176-b876-4f50-aca6-448a2291dd09 req-67c7d0af-5002-4824-ba56-e6e177d8bf4a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:16 compute-0 nova_compute[185173]: 2026-01-23 12:02:16.005 185177 DEBUG oslo_concurrency.lockutils [req-33759176-b876-4f50-aca6-448a2291dd09 req-67c7d0af-5002-4824-ba56-e6e177d8bf4a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:16 compute-0 nova_compute[185173]: 2026-01-23 12:02:16.006 185177 DEBUG oslo_concurrency.lockutils [req-33759176-b876-4f50-aca6-448a2291dd09 req-67c7d0af-5002-4824-ba56-e6e177d8bf4a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:16 compute-0 nova_compute[185173]: 2026-01-23 12:02:16.006 185177 DEBUG nova.compute.manager [req-33759176-b876-4f50-aca6-448a2291dd09 req-67c7d0af-5002-4824-ba56-e6e177d8bf4a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] No waiting events found dispatching network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:02:16 compute-0 nova_compute[185173]: 2026-01-23 12:02:16.006 185177 WARNING nova.compute.manager [req-33759176-b876-4f50-aca6-448a2291dd09 req-67c7d0af-5002-4824-ba56-e6e177d8bf4a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Received unexpected event network-vif-plugged-b9b63bb2-5fc6-48b1-8945-ac43ce6e954e for instance with vm_state deleted and task_state None.
Jan 23 12:02:17 compute-0 nova_compute[185173]: 2026-01-23 12:02:17.729 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:18 compute-0 nova_compute[185173]: 2026-01-23 12:02:18.567 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:19 compute-0 podman[244264]: 2026-01-23 12:02:19.751751655 +0000 UTC m=+0.077399291 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:02:19 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:19.813 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.260 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.261 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.261 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.261 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.358 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 podman[244284]: 2026-01-23 12:02:22.416728993 +0000 UTC m=+0.092761914 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.424 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.425 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.492 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.493 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.554 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.555 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.629 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.635 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.692 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.693 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.732 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.759 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.761 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.815 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.816 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:02:22 compute-0 nova_compute[185173]: 2026-01-23 12:02:22.876 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.198 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.199 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4958MB free_disk=72.40055847167969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.199 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.200 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.301 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.302 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.303 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.303 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.320 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.336 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.337 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.350 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.367 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.476 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.497 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
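The inventory repeated in these entries is what placement uses to admit new allocations; the effective schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick check of what those numbers imply for this host, sketched in Python (formula per placement's usage model, not the service's own code):

    # Sketch: effective schedulable capacity per resource class, using
    # placement's formula capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity}")
    # VCPU: 32.0
    # MEMORY_MB: 7167.0
    # DISK_GB: 70.2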
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.523 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.523 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:23 compute-0 nova_compute[185173]: 2026-01-23 12:02:23.569 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:24 compute-0 podman[244327]: 2026-01-23 12:02:24.732567453 +0000 UTC m=+0.061534144 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:02:25 compute-0 nova_compute[185173]: 2026-01-23 12:02:25.519 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:26 compute-0 nova_compute[185173]: 2026-01-23 12:02:26.206 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:26 compute-0 nova_compute[185173]: 2026-01-23 12:02:26.206 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:02:26 compute-0 nova_compute[185173]: 2026-01-23 12:02:26.206 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:02:27 compute-0 nova_compute[185173]: 2026-01-23 12:02:27.080 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:02:27 compute-0 nova_compute[185173]: 2026-01-23 12:02:27.080 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:02:27 compute-0 nova_compute[185173]: 2026-01-23 12:02:27.081 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:02:27 compute-0 nova_compute[185173]: 2026-01-23 12:02:27.081 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:02:27 compute-0 nova_compute[185173]: 2026-01-23 12:02:27.739 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:28 compute-0 nova_compute[185173]: 2026-01-23 12:02:28.544 185177 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769169733.5395854, ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:02:28 compute-0 nova_compute[185173]: 2026-01-23 12:02:28.544 185177 INFO nova.compute.manager [-] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] VM Stopped (Lifecycle Event)
Jan 23 12:02:28 compute-0 nova_compute[185173]: 2026-01-23 12:02:28.567 185177 DEBUG nova.compute.manager [None req-197f3d2a-5288-4aa4-b281-c533d2f53f15 - - - - - -] [instance: ee2f2821-2dbd-4f58-ae7a-79e0a6fc740e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:02:28 compute-0 nova_compute[185173]: 2026-01-23 12:02:28.575 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:29.116 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:02:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:29.117 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:02:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:02:29.117 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:02:29 compute-0 nova_compute[185173]: 2026-01-23 12:02:29.432 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:02:29 compute-0 nova_compute[185173]: 2026-01-23 12:02:29.446 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:02:29 compute-0 nova_compute[185173]: 2026-01-23 12:02:29.447 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
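The instance_info_cache payload logged at 12:02:29.432 is a JSON list of VIFs, each nesting a network, its subnets, the fixed IPs, and any floating IPs. A short sketch of walking that structure, trimmed to the keys it reads (field names exactly as they appear in the entry above):

    import json

    # Trimmed to the fields this sketch reads; nesting as logged above.
    network_info = json.loads('''[
      {"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11",
       "address": "fa:16:3e:e4:21:a1",
       "network": {"subnets": [
         {"ips": [
           {"address": "192.168.0.65",
            "floating_ips": [{"address": "192.168.122.190"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], vif["address"], ip["address"], floating)
    # 4c18896b-... fa:16:3e:e4:21:a1 192.168.0.65 ['192.168.122.190']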
Jan 23 12:02:29 compute-0 nova_compute[185173]: 2026-01-23 12:02:29.447 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:29 compute-0 nova_compute[185173]: 2026-01-23 12:02:29.447 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:29 compute-0 nova_compute[185173]: 2026-01-23 12:02:29.447 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:02:29 compute-0 podman[201022]: time="2026-01-23T12:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:02:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:02:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 23 12:02:31 compute-0 openstack_network_exporter[204160]: ERROR   12:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:02:31 compute-0 openstack_network_exporter[204160]: ERROR   12:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
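The two ERROR lines above recur every 30 seconds: the exporter invokes the ovs-vswitchd appctl commands dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only exist for a userspace (netdev/DPDK) datapath, while this host binds ports with datapath_type "system" (the kernel datapath, visible in the network_info entry at 12:02:29.432), so ovs-vswitchd answers "please specify an existing datapath". A sketch of confirming that from the host, assuming ovs-appctl is on PATH and can reach the default ovs-vswitchd control socket:

    import subprocess

    # Kernel ("system") datapaths show up here; a netdev/DPDK one would too.
    dps = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True)
    print(dps.stdout)          # e.g. "system@ovs-system"

    # The call the exporter makes; with no netdev datapath present it fails
    # with "please specify an existing datapath", matching the log above.
    res = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                         capture_output=True, text=True)
    print(res.returncode, res.stderr.strip())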
Jan 23 12:02:32 compute-0 podman[244350]: 2026-01-23 12:02:32.7330508 +0000 UTC m=+0.065077080 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, version=9.6, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Jan 23 12:02:32 compute-0 nova_compute[185173]: 2026-01-23 12:02:32.742 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:33 compute-0 nova_compute[185173]: 2026-01-23 12:02:33.578 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:37 compute-0 nova_compute[185173]: 2026-01-23 12:02:37.745 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:38 compute-0 nova_compute[185173]: 2026-01-23 12:02:38.581 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:40 compute-0 podman[244370]: 2026-01-23 12:02:40.751783038 +0000 UTC m=+0.073216749 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 12:02:40 compute-0 podman[244372]: 2026-01-23 12:02:40.758362568 +0000 UTC m=+0.072012039 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 23 12:02:40 compute-0 podman[244371]: 2026-01-23 12:02:40.760375327 +0000 UTC m=+0.078355613 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:02:42 compute-0 nova_compute[185173]: 2026-01-23 12:02:42.748 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:43 compute-0 nova_compute[185173]: 2026-01-23 12:02:43.586 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:45 compute-0 podman[244432]: 2026-01-23 12:02:45.814748236 +0000 UTC m=+0.126872298 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 12:02:46 compute-0 ovn_controller[97581]: 2026-01-23T12:02:46Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 23 12:02:47 compute-0 nova_compute[185173]: 2026-01-23 12:02:47.749 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:48 compute-0 nova_compute[185173]: 2026-01-23 12:02:48.588 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:50 compute-0 podman[244459]: 2026-01-23 12:02:50.747214339 +0000 UTC m=+0.070264067 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 12:02:52 compute-0 sshd-session[244478]: Accepted publickey for zuul from 38.102.83.196 port 60420 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 12:02:52 compute-0 systemd-logind[798]: New session 30 of user zuul.
Jan 23 12:02:52 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 23 12:02:52 compute-0 sshd-session[244478]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 12:02:52 compute-0 podman[244480]: 2026-01-23 12:02:52.620190147 +0000 UTC m=+0.078609680 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=kepler, io.buildah.version=1.29.0, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:02:52 compute-0 nova_compute[185173]: 2026-01-23 12:02:52.752 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:53 compute-0 sudo[244672]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyyrtyeeeplnjetmrlufxnkflwcieekr ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769169772.7046294-59610-91580141568512/AnsiballZ_command.py'
Jan 23 12:02:53 compute-0 sudo[244672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 12:02:53 compute-0 python3[244674]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 12:02:53 compute-0 sudo[244672]: pam_unix(sudo:session): session closed for user root
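The sudo/ansible exchange above is a CI task verifying the telemetry container: it shells out to podman ps -a --format "{{.Names}} {{.Status}}" and greps for ceilometer_agent_compute, whose healthy status podman logged at 12:02:40. The same check sketched in Python (the "healthy" substring and the sample output are assumptions about what the task looks for):

    import subprocess

    # Equivalent of:
    #   podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout

    status = [line for line in out.splitlines()
              if "ceilometer_agent_compute" in line]
    print(status)   # e.g. ['ceilometer_agent_compute Up 2 hours (healthy)'] -- hypothetical
    assert status and "healthy" in status[0]   # assumed success criterion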
Jan 23 12:02:53 compute-0 nova_compute[185173]: 2026-01-23 12:02:53.590 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:55 compute-0 podman[244712]: 2026-01-23 12:02:55.748901696 +0000 UTC m=+0.065676454 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:02:57 compute-0 nova_compute[185173]: 2026-01-23 12:02:57.754 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:58 compute-0 nova_compute[185173]: 2026-01-23 12:02:58.593 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:02:59 compute-0 podman[201022]: time="2026-01-23T12:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:02:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:02:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 23 12:03:01 compute-0 openstack_network_exporter[204160]: ERROR   12:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:03:01 compute-0 openstack_network_exporter[204160]: ERROR   12:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.456 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.457 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
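The two DEBUG lines above record that the [pollsters] source carries more pollsters than the agent has worker threads (one, here), so pollsters queue behind each other and the polling cycle runs essentially sequentially. A minimal model of that effect with concurrent.futures (pollster names beyond those in this log, and all timings, are invented for illustration):

    from concurrent.futures import ThreadPoolExecutor
    import time

    # More pollsters than workers, as the log reports for source [pollsters].
    pollsters = ["network.outgoing.bytes.delta", "disk.device.usage",
                 "cpu", "memory.usage"]

    def poll(name):
        time.sleep(0.1)        # stand-in for one pollster's work
        return name

    for workers in (1, 4):     # the log shows 1 worker thread
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(poll, pollsters))
        print(f"{workers} worker(s): {time.monotonic() - start:.2f}s")
    # 1 worker(s): ~0.40s  (pollsters queue, the cycle stretches)
    # 4 worker(s): ~0.10s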
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.463 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.464 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.464 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.465 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.465 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.466 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.467 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'name': 'vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T12:03:01.468777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.470 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.471 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.472 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.472 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411ca9c0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
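
The six "Registering pollster" lines above show the compute agent wiring each stevedore extension from the [pollsters] source to one shared ThreadPoolExecutor, together with a per-cycle cache, the pollster history, and the discovery cache. A minimal sketch of that loading pattern follows; the ceilometer.poll.compute namespace is ceilometer's documented entry-point group, while run_one() and the worker count are illustrative assumptions, not agent code:

```python
# Minimal sketch of stevedore-based pollster loading, as implied by the
# "Registering pollster" lines above. run_one() and max_workers are
# illustrative assumptions; the namespace is ceilometer's entry-point
# group for compute pollsters.
from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

def run_one(ext, cache, discovery_cache):
    # A real pollster would be invoked via ext.obj.get_samples(...);
    # this stub only shows the dispatch shape.
    return f"registered {ext.name} from source [pollsters]"

mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
executor = ThreadPoolExecutor(max_workers=4)
cache = {"inspect_vnics": {}}
discovery_cache = {"local_instances": []}
futures = [executor.submit(run_one, ext, cache, discovery_cache)
           for ext in mgr]
print([f.result() for f in futures])
```
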
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.476 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.480 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.481 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T12:03:01.481901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.503 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.503 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.504 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.528 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.528 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.528 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
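
Each meter in this cycle follows the same trace: discovery via [local_instances], a coordination check against a (here absent) hashring, a heartbeat update, one sample per resource (and per device for the disk.device.* meters, which is why three volumes appear per instance), then a "Finished polling" line. A self-contained sketch of that cycle, where FakePollster, discover() and heartbeats are illustrative stand-ins rather than ceilometer internals:

```python
# Self-contained sketch of the per-pollster cycle traced above:
# discovery -> coordination check -> heartbeat -> sampling.
# FakePollster, discover() and heartbeats are illustrative stand-ins.
from datetime import datetime, timezone

heartbeats = {}

def discover():
    # Stands in for AgentManager.discover via [local_instances]: the
    # two instance UUIDs sampled throughout this cycle.
    return ["55846fbf-a87a-4cba-be0b-23125d3d9ef4",
            "e9de5be9-383e-4139-a192-9a00ac9030d0"]

class FakePollster:
    def __init__(self, name):
        self.name = name

    def get_samples(self, resources):
        # Real pollsters read hypervisor stats; this toy emits zeros,
        # one sample per resource.
        return [(r, self.name, 0) for r in resources]

def poll_one(pollster, discovery_cache, hashring=None):
    resources = discovery_cache.setdefault("local_instances", discover())
    if hashring is not None:
        # Coordination would filter resources here; the log shows
        # "coordination group name [None]", so this branch stays idle.
        resources = [r for r in resources if r in hashring]
    heartbeats[pollster.name] = datetime.now(timezone.utc)
    return pollster.get_samples(resources)

print(poll_one(FakePollster("disk.device.usage"), {}))
```
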
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T12:03:01.530072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.597 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.663 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 41861120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.664 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.664 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.665 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
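
network.outgoing.bytes.rate is skipped above because discovery returned no resources that the pollster had not already seen this cycle. A hedged model of that skip decision; the set-difference bookkeeping below is an assumption about the history check, not a copy of manager.py:

```python
# Hedged model of the "no new resources found this cycle" skip above.
# The set-difference bookkeeping is an assumption, not manager.py code.
def should_poll(meter, discovered, history):
    seen = history.setdefault(meter, set())
    new = set(discovered) - seen
    seen |= new
    return bool(new)

history = {}
resources = ["test_0",
             "vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g"]
print(should_poll("network.outgoing.bytes.rate", resources, history))  # True
print(should_poll("network.outgoing.bytes.rate", resources, history))  # False -> skip
```
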
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.665 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.666 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.666 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.666 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.667 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 600800165 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.667 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 7490744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.667 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.668 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T12:03:01.666164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.669 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.669 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T12:03:01.669000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T12:03:01.670636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.670 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.671 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.671 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.671 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.671 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.672 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.673 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.673 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.673 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T12:03:01.673086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.674 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T12:03:01.674743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.675 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.676 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.676 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T12:03:01.675991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.676 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.677 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 327509499 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.677 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 57556257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.677 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 50069079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.678 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T12:03:01.678764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.679 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.680 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.680 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.681 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T12:03:01.681499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.682 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.683 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T12:03:01.682727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.684 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T12:03:01.684398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.713 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.734 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
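
The memory.usage volumes above (48.76171875 and 48.890625, in MB) are the kind of figure the libvirt inspector derives from a domain's balloon statistics. A sketch of that derivation, assuming libvirt's documented memoryStats() keys and the usual available-minus-unused arithmetic; the URI is assumed and the UUID is taken from this log:

```python
# Hedged sketch of where a memory.usage figure like 48.76171875 MB can
# come from. Assumes libvirt's memoryStats() keys (values in KiB) and
# the available-minus-unused convention; not a copy of the ceilometer
# inspector.
import libvirt  # python3-libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("55846fbf-a87a-4cba-be0b-23125d3d9ef4")
stats = dom.memoryStats()  # dict of counters, in KiB
if "available" in stats and "unused" in stats:
    usage_mb = (stats["available"] - stats["unused"]) / 1024.0
else:
    usage_mb = stats.get("rss", 0) / 1024.0  # fallback: resident set
print(f"memory.usage = {usage_mb} MB")
conn.close()
```
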
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.735 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.736 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T12:03:01.735725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
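
Both instances report a power.state volume of 1, which lines up with libvirt's domain state enumeration, where 1 is VIR_DOMAIN_RUNNING. A sketch reading and labelling that state; the enum values are libvirt's documented constants, and tying the meter to dom.state() is an assumption about the inspector:

```python
# Sketch: power.state volume 1 corresponds to VIR_DOMAIN_RUNNING in
# libvirt's domain state enum. Reading it via dom.state() is an
# assumption about how the inspector obtains the value.
import libvirt

STATE_NAMES = {
    libvirt.VIR_DOMAIN_NOSTATE: "nostate",          # 0
    libvirt.VIR_DOMAIN_RUNNING: "running",          # 1  <- seen here
    libvirt.VIR_DOMAIN_BLOCKED: "blocked",          # 2
    libvirt.VIR_DOMAIN_PAUSED: "paused",            # 3
    libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",        # 4
    libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",          # 5
    libvirt.VIR_DOMAIN_CRASHED: "crashed",          # 6
    libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",  # 7
}

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("e9de5be9-383e-4139-a192-9a00ac9030d0")
state, _reason = dom.state()
print(f"power.state = {state} ({STATE_NAMES.get(state, 'unknown')})")
conn.close()
```
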
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.737 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.737 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T12:03:01.737303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.739 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 42350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T12:03:01.738851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.739 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/cpu volume: 33110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
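
The cpu samples above (42350000000 and 33110000000) are cumulative guest CPU time in nanoseconds, i.e. roughly 42.35 s and 33.11 s since boot. Turning that counter into a utilisation percentage means differencing two polls; the sketch below assumes a 10-minute polling interval and one vCPU, neither of which is stated in the log:

```python
# Sketch: converting the cumulative cpu counter (nanoseconds) into a
# utilisation percentage between two polls. Interval and vCPU count
# are assumptions, not values from this log.
def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus=1):
    busy_s = (cur_ns - prev_ns) / 1e9
    return 100.0 * busy_s / (interval_s * vcpus)

# 0.35 s of CPU over a 600 s interval -> ~0.06% of one vCPU
print(cpu_util_pct(42_000_000_000, 42_350_000_000, interval_s=600))
```
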
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.739 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.740 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T12:03:01.740501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.742 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.742 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T12:03:01.741998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.742 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T12:03:01.743561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.743 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.744 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.744 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.744 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.744 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.745 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.745 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T12:03:01.746076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.746 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.746 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.747 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.747 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.747 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.747 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T12:03:01.747612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.749 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.749 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.749 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T12:03:01.749029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.749 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.750 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.750 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.750 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T12:03:01.751682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.751 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.752 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.752 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.752 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.753 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.753 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T12:03:01.753111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.753 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.753 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.753 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.754 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.754 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.754 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.755 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.756 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:03:01.757 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:03:02 compute-0 nova_compute[185173]: 2026-01-23 12:03:02.757 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:03 compute-0 nova_compute[185173]: 2026-01-23 12:03:03.595 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:03 compute-0 podman[244735]: 2026-01-23 12:03:03.763042895 +0000 UTC m=+0.091421633 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:03:07 compute-0 nova_compute[185173]: 2026-01-23 12:03:07.759 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:08 compute-0 nova_compute[185173]: 2026-01-23 12:03:08.598 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:09 compute-0 sshd-session[244756]: Invalid user ubuntu from 45.148.10.240 port 40196
Jan 23 12:03:09 compute-0 sshd-session[244756]: Connection closed by invalid user ubuntu 45.148.10.240 port 40196 [preauth]
Jan 23 12:03:11 compute-0 podman[244758]: 2026-01-23 12:03:11.80201944 +0000 UTC m=+0.105331742 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:03:11 compute-0 podman[244759]: 2026-01-23 12:03:11.802216995 +0000 UTC m=+0.100081854 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute)
Jan 23 12:03:11 compute-0 podman[244760]: 2026-01-23 12:03:11.823597747 +0000 UTC m=+0.116811303 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:03:12 compute-0 nova_compute[185173]: 2026-01-23 12:03:12.763 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.121 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "fda07229-b97e-4868-9f08-7b1def0956ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.121 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "fda07229-b97e-4868-9f08-7b1def0956ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.170 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.257 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.258 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.268 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.268 185177 INFO nova.compute.claims [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Claim successful on node compute-0.ctlplane.example.com
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.406 185177 DEBUG nova.compute.provider_tree [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.422 185177 DEBUG nova.scheduler.client.report [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.445 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.446 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.483 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.497 185177 INFO nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.533 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.600 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.632 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.634 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.634 185177 INFO nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Creating image(s)
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.635 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.635 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.636 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.636 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "39bc7a80f52b51e72d1db925602ddd475bf13511" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:13 compute-0 nova_compute[185173]: 2026-01-23 12:03:13.637 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "39bc7a80f52b51e72d1db925602ddd475bf13511" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.766 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.826 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.827 185177 DEBUG nova.virt.images [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] 06f70bc4-7667-428f-90da-f4c7fb4cfe6a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.828 185177 DEBUG nova.privsep.utils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.828 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.part /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.990 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.part /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.converted" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:14 compute-0 nova_compute[185173]: 2026-01-23 12:03:14.994 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.051 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511.converted --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.053 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "39bc7a80f52b51e72d1db925602ddd475bf13511" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.068 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.122 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.123 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "39bc7a80f52b51e72d1db925602ddd475bf13511" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.124 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "39bc7a80f52b51e72d1db925602ddd475bf13511" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.136 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.191 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.193 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511,backing_fmt=raw /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.229 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511,backing_fmt=raw /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.230 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "39bc7a80f52b51e72d1db925602ddd475bf13511" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.231 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.248 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.291 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/39bc7a80f52b51e72d1db925602ddd475bf13511 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.293 185177 DEBUG nova.virt.disk.api [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Checking if we can resize image /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.294 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.355 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.357 185177 DEBUG nova.virt.disk.api [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Cannot resize image /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.357 185177 DEBUG nova.objects.instance [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'migration_context' on Instance uuid fda07229-b97e-4868-9f08-7b1def0956ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.584 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.585 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.586 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.602 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.663 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.664 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.665 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.676 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.734 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.736 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.776 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.777 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.778 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.835 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.836 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.837 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Ensure instance console log exists: /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.838 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.838 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.839 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.841 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T12:02:57Z,direct_url=<?>,disk_format='qcow2',id=06f70bc4-7667-428f-90da-f4c7fb4cfe6a,min_disk=0,min_ram=0,name='fvt_testing_image',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T12:03:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': '06f70bc4-7667-428f-90da-f4c7fb4cfe6a'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.848 185177 WARNING nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.854 185177 DEBUG nova.virt.libvirt.host [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.855 185177 DEBUG nova.virt.libvirt.host [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.859 185177 DEBUG nova.virt.libvirt.host [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.860 185177 DEBUG nova.virt.libvirt.host [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.860 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.861 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T12:03:04Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='54bc300f-401b-4402-9d1d-b4ee6fecd608',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-23T12:02:57Z,direct_url=<?>,disk_format='qcow2',id=06f70bc4-7667-428f-90da-f4c7fb4cfe6a,min_disk=0,min_ram=0,name='fvt_testing_image',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-23T12:03:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.861 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.861 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.862 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.862 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.862 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.863 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.863 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.863 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.864 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.864 185177 DEBUG nova.virt.hardware [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 12:03:15 compute-0 nova_compute[185173]: 2026-01-23 12:03:15.867 185177 DEBUG nova.objects.instance [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'pci_devices' on Instance uuid fda07229-b97e-4868-9f08-7b1def0956ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:03:16 compute-0 nova_compute[185173]: 2026-01-23 12:03:16.328 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] End _get_guest_xml xml=<domain type="kvm">
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <uuid>fda07229-b97e-4868-9f08-7b1def0956ad</uuid>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <name>instance-00000005</name>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <memory>524288</memory>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <metadata>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:name>fvt_testing_server</nova:name>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 12:03:15</nova:creationTime>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:flavor name="fvt_testing_flavor">
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:memory>512</nova:memory>
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:ephemeral>1</nova:ephemeral>
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:user uuid="d9858533c2284846a8f0f19a1fb45045">admin</nova:user>
Jan 23 12:03:16 compute-0 nova_compute[185173]:         <nova:project uuid="bd16a0de2f5e4a8480a855ef0e1a3f14">admin</nova:project>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="06f70bc4-7667-428f-90da-f4c7fb4cfe6a"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <nova:ports/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </metadata>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <system>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <entry name="serial">fda07229-b97e-4868-9f08-7b1def0956ad</entry>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <entry name="uuid">fda07229-b97e-4868-9f08-7b1def0956ad</entry>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </system>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <os>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </os>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <features>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <apic/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </features>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </clock>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </cpu>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   <devices>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <target dev="vdb" bus="virtio"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.config"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/console.log" append="off"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </serial>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <video>
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </video>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </rng>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 12:03:16 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 12:03:16 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 12:03:16 compute-0 nova_compute[185173]:   </devices>
Jan 23 12:03:16 compute-0 nova_compute[185173]: </domain>
Jan 23 12:03:16 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 12:03:16 compute-0 nova_compute[185173]: 2026-01-23 12:03:16.508 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:03:16 compute-0 nova_compute[185173]: 2026-01-23 12:03:16.509 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:03:16 compute-0 nova_compute[185173]: 2026-01-23 12:03:16.509 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:03:16 compute-0 nova_compute[185173]: 2026-01-23 12:03:16.510 185177 INFO nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Using config drive
Jan 23 12:03:16 compute-0 podman[244858]: 2026-01-23 12:03:16.770617283 +0000 UTC m=+0.101616242 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.050 185177 INFO nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Creating config drive at /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.config
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.056 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbvayrmym execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.182 185177 DEBUG oslo_concurrency.processutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbvayrmym" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:17 compute-0 systemd-machined[156550]: New machine qemu-5-instance-00000005.
Jan 23 12:03:17 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.576 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169797.575396, fda07229-b97e-4868-9f08-7b1def0956ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.577 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] VM Resumed (Lifecycle Event)
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.579 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.580 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.584 185177 INFO nova.virt.libvirt.driver [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Instance spawned successfully.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.584 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.603 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.612 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.616 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.616 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.616 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.617 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.617 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.618 185177 DEBUG nova.virt.libvirt.driver [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:03:17 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.653 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.653 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769169797.57711, fda07229-b97e-4868-9f08-7b1def0956ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.653 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] VM Started (Lifecycle Event)
Jan 23 12:03:17 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.698 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.703 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.708 185177 INFO nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Took 4.07 seconds to spawn the instance on the hypervisor.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.708 185177 DEBUG nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.739 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.764 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.774 185177 INFO nova.compute.manager [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Took 4.55 seconds to build instance.
Jan 23 12:03:17 compute-0 nova_compute[185173]: 2026-01-23 12:03:17.791 185177 DEBUG oslo_concurrency.lockutils [None req-78ffecd2-0828-4344-8eec-4895263ee4ec d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "fda07229-b97e-4868-9f08-7b1def0956ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:18 compute-0 nova_compute[185173]: 2026-01-23 12:03:18.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:18 compute-0 nova_compute[185173]: 2026-01-23 12:03:18.601 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:21 compute-0 podman[244932]: 2026-01-23 12:03:21.76146238 +0000 UTC m=+0.091835393 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 12:03:22 compute-0 podman[244951]: 2026-01-23 12:03:22.739349501 +0000 UTC m=+0.072246265 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=kepler, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git)
Jan 23 12:03:22 compute-0 nova_compute[185173]: 2026-01-23 12:03:22.767 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:23 compute-0 nova_compute[185173]: 2026-01-23 12:03:23.603 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:26 compute-0 podman[244971]: 2026-01-23 12:03:26.734309418 +0000 UTC m=+0.062734852 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:03:27 compute-0 nova_compute[185173]: 2026-01-23 12:03:27.768 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:28 compute-0 nova_compute[185173]: 2026-01-23 12:03:28.605 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:28 compute-0 nova_compute[185173]: 2026-01-23 12:03:28.941 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:28 compute-0 nova_compute[185173]: 2026-01-23 12:03:28.941 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:28 compute-0 nova_compute[185173]: 2026-01-23 12:03:28.942 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:03:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:03:29.118 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:03:29.119 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:03:29.119 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:29 compute-0 nova_compute[185173]: 2026-01-23 12:03:29.291 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:03:29 compute-0 nova_compute[185173]: 2026-01-23 12:03:29.291 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:03:29 compute-0 nova_compute[185173]: 2026-01-23 12:03:29.292 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:03:29 compute-0 podman[201022]: time="2026-01-23T12:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:03:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:03:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.344 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updating instance_info_cache with network_info: [{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.439 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.439 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.440 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.440 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.441 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.441 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.441 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.442 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.442 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.474 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.474 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.475 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.475 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.561 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.621 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.623 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.681 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.683 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.747 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.748 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.806 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.813 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.872 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.883 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.940 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:30 compute-0 nova_compute[185173]: 2026-01-23 12:03:30.942 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.001 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.003 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.060 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.066 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.129 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.130 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.229 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.230 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.291 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.292 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.351 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:03:31 compute-0 openstack_network_exporter[204160]: ERROR   12:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:03:31 compute-0 openstack_network_exporter[204160]: ERROR   12:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.670 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.671 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4763MB free_disk=72.37156677246094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.672 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.672 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.930 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.930 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.931 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance fda07229-b97e-4868-9f08-7b1def0956ad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.931 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:03:31 compute-0 nova_compute[185173]: 2026-01-23 12:03:31.931 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.103 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.116 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.148 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.149 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.149 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.150 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 12:03:32 compute-0 nova_compute[185173]: 2026-01-23 12:03:32.771 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:33 compute-0 nova_compute[185173]: 2026-01-23 12:03:33.607 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:34 compute-0 podman[245031]: 2026-01-23 12:03:34.734759519 +0000 UTC m=+0.066551776 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.473 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "fda07229-b97e-4868-9f08-7b1def0956ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.473 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "fda07229-b97e-4868-9f08-7b1def0956ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.474 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "fda07229-b97e-4868-9f08-7b1def0956ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.474 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "fda07229-b97e-4868-9f08-7b1def0956ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.475 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "fda07229-b97e-4868-9f08-7b1def0956ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.476 185177 INFO nova.compute.manager [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Terminating instance
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.477 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "refresh_cache-fda07229-b97e-4868-9f08-7b1def0956ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.478 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquired lock "refresh_cache-fda07229-b97e-4868-9f08-7b1def0956ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:03:36 compute-0 nova_compute[185173]: 2026-01-23 12:03:36.478 185177 DEBUG nova.network.neutron [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.079 185177 DEBUG nova.network.neutron [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.250 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.250 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.434 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.475 185177 DEBUG nova.network.neutron [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.548 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Releasing lock "refresh_cache-fda07229-b97e-4868-9f08-7b1def0956ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.549 185177 DEBUG nova.compute.manager [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 12:03:37 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 23 12:03:37 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.593s CPU time.
Jan 23 12:03:37 compute-0 systemd-machined[156550]: Machine qemu-5-instance-00000005 terminated.
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.773 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.824 185177 INFO nova.virt.libvirt.driver [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Instance destroyed successfully.
Jan 23 12:03:37 compute-0 nova_compute[185173]: 2026-01-23 12:03:37.825 185177 DEBUG nova.objects.instance [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'resources' on Instance uuid fda07229-b97e-4868-9f08-7b1def0956ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.472 185177 INFO nova.virt.libvirt.driver [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Deleting instance files /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad_del
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.473 185177 INFO nova.virt.libvirt.driver [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Deletion of /var/lib/nova/instances/fda07229-b97e-4868-9f08-7b1def0956ad_del complete
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.609 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.686 185177 INFO nova.compute.manager [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Took 1.14 seconds to destroy the instance on the hypervisor.
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.688 185177 DEBUG oslo.service.loopingcall [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.688 185177 DEBUG nova.compute.manager [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 12:03:38 compute-0 nova_compute[185173]: 2026-01-23 12:03:38.689 185177 DEBUG nova.network.neutron [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.307 185177 DEBUG nova.network.neutron [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.323 185177 DEBUG nova.network.neutron [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.338 185177 INFO nova.compute.manager [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Took 0.65 seconds to deallocate network for instance.
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.370 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.371 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.467 185177 DEBUG nova.compute.provider_tree [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.482 185177 DEBUG nova.scheduler.client.report [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.505 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.550 185177 INFO nova.scheduler.client.report [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Deleted allocations for instance fda07229-b97e-4868-9f08-7b1def0956ad
Jan 23 12:03:39 compute-0 nova_compute[185173]: 2026-01-23 12:03:39.774 185177 DEBUG oslo_concurrency.lockutils [None req-ea722a42-dc5e-4120-aaa4-3713b703d947 d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "fda07229-b97e-4868-9f08-7b1def0956ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:03:42 compute-0 podman[245067]: 2026-01-23 12:03:42.75058086 +0000 UTC m=+0.070202915 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:03:42 compute-0 podman[245066]: 2026-01-23 12:03:42.754139536 +0000 UTC m=+0.076335294 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true)
Jan 23 12:03:42 compute-0 podman[245065]: 2026-01-23 12:03:42.776436391 +0000 UTC m=+0.094811406 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 12:03:42 compute-0 nova_compute[185173]: 2026-01-23 12:03:42.776 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:43 compute-0 nova_compute[185173]: 2026-01-23 12:03:43.613 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:47 compute-0 nova_compute[185173]: 2026-01-23 12:03:47.776 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:47 compute-0 podman[245127]: 2026-01-23 12:03:47.779045592 +0000 UTC m=+0.111733679 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 12:03:48 compute-0 nova_compute[185173]: 2026-01-23 12:03:48.615 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:52 compute-0 podman[245154]: 2026-01-23 12:03:52.77832475 +0000 UTC m=+0.103231431 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 12:03:52 compute-0 nova_compute[185173]: 2026-01-23 12:03:52.779 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:52 compute-0 nova_compute[185173]: 2026-01-23 12:03:52.821 185177 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769169817.819922, fda07229-b97e-4868-9f08-7b1def0956ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:03:52 compute-0 nova_compute[185173]: 2026-01-23 12:03:52.821 185177 INFO nova.compute.manager [-] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] VM Stopped (Lifecycle Event)
Jan 23 12:03:52 compute-0 nova_compute[185173]: 2026-01-23 12:03:52.859 185177 DEBUG nova.compute.manager [None req-6a494acf-7fd1-46b6-9ef0-b9fb408a0516 - - - - - -] [instance: fda07229-b97e-4868-9f08-7b1def0956ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:03:52 compute-0 podman[245174]: 2026-01-23 12:03:52.880212287 +0000 UTC m=+0.069400735 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=kepler, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vendor=Red Hat, Inc., io.openshift.expose-services=)
Jan 23 12:03:53 compute-0 sshd-session[244491]: Received disconnect from 38.102.83.196 port 60420:11: disconnected by user
Jan 23 12:03:53 compute-0 sshd-session[244491]: Disconnected from user zuul 38.102.83.196 port 60420
Jan 23 12:03:53 compute-0 sshd-session[244478]: pam_unix(sshd:session): session closed for user zuul
Jan 23 12:03:53 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 23 12:03:53 compute-0 systemd-logind[798]: Session 30 logged out. Waiting for processes to exit.
Jan 23 12:03:53 compute-0 systemd-logind[798]: Removed session 30.
Jan 23 12:03:53 compute-0 nova_compute[185173]: 2026-01-23 12:03:53.618 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:57 compute-0 podman[245195]: 2026-01-23 12:03:57.728972034 +0000 UTC m=+0.060899268 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 12:03:57 compute-0 nova_compute[185173]: 2026-01-23 12:03:57.781 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:58 compute-0 nova_compute[185173]: 2026-01-23 12:03:58.619 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:03:59 compute-0 podman[201022]: time="2026-01-23T12:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:03:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:03:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 12:04:01 compute-0 openstack_network_exporter[204160]: ERROR   12:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:04:01 compute-0 openstack_network_exporter[204160]: ERROR   12:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:04:02 compute-0 nova_compute[185173]: 2026-01-23 12:04:02.784 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:03 compute-0 nova_compute[185173]: 2026-01-23 12:04:03.622 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:05 compute-0 podman[245218]: 2026-01-23 12:04:05.751717268 +0000 UTC m=+0.080793533 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:04:07 compute-0 nova_compute[185173]: 2026-01-23 12:04:07.786 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:08 compute-0 nova_compute[185173]: 2026-01-23 12:04:08.624 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:12 compute-0 nova_compute[185173]: 2026-01-23 12:04:12.790 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:13 compute-0 sshd-session[245238]: Accepted publickey for zuul from 38.102.83.196 port 38810 ssh2: RSA SHA256:l5/z7/B1LZInfKNQYpI40S/PX6fnGwoDdxTfZ/2+PpU
Jan 23 12:04:13 compute-0 systemd-logind[798]: New session 31 of user zuul.
Jan 23 12:04:13 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 23 12:04:13 compute-0 sshd-session[245238]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 12:04:13 compute-0 podman[245242]: 2026-01-23 12:04:13.159355751 +0000 UTC m=+0.077844982 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:04:13 compute-0 podman[245243]: 2026-01-23 12:04:13.1765396 +0000 UTC m=+0.083432237 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 12:04:13 compute-0 podman[245240]: 2026-01-23 12:04:13.20318069 +0000 UTC m=+0.125091824 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 12:04:13 compute-0 nova_compute[185173]: 2026-01-23 12:04:13.626 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:13 compute-0 sudo[245475]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izfgubbqylkoqditjdtbtpljtsthqnyg ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769169853.214282-60363-101762947714864/AnsiballZ_command.py'
Jan 23 12:04:13 compute-0 sudo[245475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 12:04:13 compute-0 python3[245477]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 12:04:13 compute-0 sudo[245475]: pam_unix(sudo:session): session closed for user root
Jan 23 12:04:15 compute-0 nova_compute[185173]: 2026-01-23 12:04:15.420 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:17 compute-0 nova_compute[185173]: 2026-01-23 12:04:17.792 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:18 compute-0 nova_compute[185173]: 2026-01-23 12:04:18.628 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:18 compute-0 podman[245516]: 2026-01-23 12:04:18.771589168 +0000 UTC m=+0.102194965 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:04:21 compute-0 sudo[245716]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjjgwtjmfnallwhbdktgvfkndgawytr ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769169860.9567482-60527-151809659562531/AnsiballZ_command.py'
Jan 23 12:04:21 compute-0 sudo[245716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 12:04:21 compute-0 python3[245718]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 12:04:21 compute-0 sudo[245716]: pam_unix(sudo:session): session closed for user root
Jan 23 12:04:22 compute-0 nova_compute[185173]: 2026-01-23 12:04:22.794 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:23 compute-0 nova_compute[185173]: 2026-01-23 12:04:23.232 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:23 compute-0 nova_compute[185173]: 2026-01-23 12:04:23.629 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:23 compute-0 podman[245757]: 2026-01-23 12:04:23.819907693 +0000 UTC m=+0.131670385 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:04:23 compute-0 podman[245756]: 2026-01-23 12:04:23.823642684 +0000 UTC m=+0.136635536 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.262 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.262 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.262 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.263 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.355 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.442 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.443 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.501 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.502 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.564 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.566 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.642 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.649 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.711 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.713 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.774 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.775 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.856 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.857 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:04:24 compute-0 nova_compute[185173]: 2026-01-23 12:04:24.917 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.228 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.229 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4869MB free_disk=72.3719711303711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.229 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.230 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.420 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.421 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.421 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.421 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.475 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.491 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.513 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:04:25 compute-0 nova_compute[185173]: 2026-01-23 12:04:25.513 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:04:27 compute-0 nova_compute[185173]: 2026-01-23 12:04:27.513 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:27 compute-0 nova_compute[185173]: 2026-01-23 12:04:27.796 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:27 compute-0 nova_compute[185173]: 2026-01-23 12:04:27.911 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:27 compute-0 nova_compute[185173]: 2026-01-23 12:04:27.911 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:04:27 compute-0 nova_compute[185173]: 2026-01-23 12:04:27.911 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:04:28 compute-0 nova_compute[185173]: 2026-01-23 12:04:28.631 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:28 compute-0 podman[245819]: 2026-01-23 12:04:28.752771091 +0000 UTC m=+0.084513635 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:04:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:04:29.120 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:04:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:04:29.120 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:04:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:04:29.121 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:04:29 compute-0 nova_compute[185173]: 2026-01-23 12:04:29.323 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:04:29 compute-0 nova_compute[185173]: 2026-01-23 12:04:29.324 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:04:29 compute-0 nova_compute[185173]: 2026-01-23 12:04:29.324 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:04:29 compute-0 nova_compute[185173]: 2026-01-23 12:04:29.324 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:04:29 compute-0 podman[201022]: time="2026-01-23T12:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:04:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:04:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 12:04:30 compute-0 sudo[246017]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fajxzuwvlrkbgliohnhupevzvvqruifc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769169870.2352653-60682-220305706216625/AnsiballZ_command.py'
Jan 23 12:04:30 compute-0 sudo[246017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 12:04:31 compute-0 python3[246019]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 12:04:31 compute-0 sudo[246017]: pam_unix(sudo:session): session closed for user root
Jan 23 12:04:31 compute-0 openstack_network_exporter[204160]: ERROR   12:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:04:31 compute-0 openstack_network_exporter[204160]: ERROR   12:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.670 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.688 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.688 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.688 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.689 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.689 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.689 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.689 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:04:32 compute-0 nova_compute[185173]: 2026-01-23 12:04:32.798 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:33 compute-0 nova_compute[185173]: 2026-01-23 12:04:33.633 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:36 compute-0 podman[246058]: 2026-01-23 12:04:36.761240621 +0000 UTC m=+0.086291757 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Jan 23 12:04:37 compute-0 nova_compute[185173]: 2026-01-23 12:04:37.801 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:38 compute-0 nova_compute[185173]: 2026-01-23 12:04:38.635 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:42 compute-0 nova_compute[185173]: 2026-01-23 12:04:42.804 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:43 compute-0 nova_compute[185173]: 2026-01-23 12:04:43.637 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:43 compute-0 podman[246079]: 2026-01-23 12:04:43.753775408 +0000 UTC m=+0.073950136 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:04:43 compute-0 podman[246081]: 2026-01-23 12:04:43.755206933 +0000 UTC m=+0.064396033 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 12:04:43 compute-0 podman[246080]: 2026-01-23 12:04:43.767656147 +0000 UTC m=+0.084656487 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Jan 23 12:04:45 compute-0 sudo[246308]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-morhyalrydbtlrggamvwxeharqdjxriy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769169884.7271593-60902-229632218138475/AnsiballZ_command.py'
Jan 23 12:04:45 compute-0 sudo[246308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 12:04:45 compute-0 python3[246310]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 12:04:45 compute-0 sudo[246308]: pam_unix(sudo:session): session closed for user root
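
The sudo/ansible block above runs a shell health probe against podman. The same check, reproduced as a small Python sketch (assumes podman is available on the host):

    import subprocess

    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        if "openstack_network_exporter" in line:
            print(line)  # e.g. "openstack_network_exporter Up ... (healthy)"
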
Jan 23 12:04:47 compute-0 nova_compute[185173]: 2026-01-23 12:04:47.805 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:48 compute-0 nova_compute[185173]: 2026-01-23 12:04:48.640 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:49 compute-0 podman[246351]: 2026-01-23 12:04:49.778666586 +0000 UTC m=+0.101335424 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Jan 23 12:04:52 compute-0 nova_compute[185173]: 2026-01-23 12:04:52.807 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:53 compute-0 nova_compute[185173]: 2026-01-23 12:04:53.642 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:54 compute-0 podman[246378]: 2026-01-23 12:04:54.741840927 +0000 UTC m=+0.074052208 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:04:54 compute-0 podman[246377]: 2026-01-23 12:04:54.745684401 +0000 UTC m=+0.075679048 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler)
Jan 23 12:04:57 compute-0 nova_compute[185173]: 2026-01-23 12:04:57.809 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:58 compute-0 nova_compute[185173]: 2026-01-23 12:04:58.644 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:04:59 compute-0 podman[201022]: time="2026-01-23T12:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:04:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:04:59 compute-0 podman[246414]: 2026-01-23 12:04:59.786645759 +0000 UTC m=+0.108592981 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
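
The node_exporter config above restricts the systemd collector with --collector.systemd.unit-include. Checking that regex directly (node_exporter anchors these patterns, so fullmatch approximates its behavior; the unit names below are examples, not taken from this host):

    import re

    unit_include = re.compile(
        r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # sshd.service -> False: excluded from the systemd collector
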
Jan 23 12:04:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
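
The two "GET /v4.9.3/libpod/..." lines are clients of podman's libpod REST API on the service socket (the podman_exporter config above mounts /run/podman/podman.sock). A stdlib-only sketch of such a request over the unix socket, error handling omitted:

    import socket

    SOCK = "/run/podman/podman.sock"   # socket path from the config above
    PATH = "/v4.9.3/libpod/containers/json?all=true"

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(("GET %s HTTP/1.1\r\nHost: localhost\r\n"
               "Connection: close\r\n\r\n" % PATH).encode())
    resp = b""
    while chunk := s.recv(65536):
        resp += chunk
    print(resp.split(b"\r\n\r\n", 1)[0].decode())  # status line and headers
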
Jan 23 12:05:01 compute-0 openstack_network_exporter[204160]: ERROR   12:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:05:01 compute-0 openstack_network_exporter[204160]: ERROR   12:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
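
The two ERROR lines above are appctl calls (dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show) that only apply to the userspace dpif-netdev datapath; on a host running the kernel datapath there is no such datapath to query, hence the failure. Reproducing one call by hand (expected to fail the same way on this host):

    import subprocess

    r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                       capture_output=True, text=True)
    print(r.returncode, (r.stderr or r.stdout).strip())
    # on a kernel-datapath host: non-zero exit,
    # "please specify an existing datapath"
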
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.456 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.457 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
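
The two manager lines above describe the polling executor: pollsters are submitted to a ThreadPoolExecutor, and with more pollsters than worker threads (here 1) a cycle effectively serializes. An illustrative sketch, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # stand-in for one pollster's sample-collection round trip
        return "polled %s" % name

    pollsters = ["network.outgoing.bytes.delta", "disk.device.usage",
                 "disk.device.write.bytes"]

    # max_workers=1 matches the "[1] threads" line above
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)
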
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28432f2c60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.464 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.469 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'name': 'vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {'metering.server_group': '500baa09-1e39-474e-b275-8b2dffe3a65b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
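
The two "instance data:" lines show libvirt-based discovery attaching nova metadata to each running domain. A hedged sketch of that lookup using the libvirt Python bindings; the namespace URI matches the LIBVIRT_METADATA_URI visible in the kepler config earlier in the log, and a reachable local libvirtd is assumed:

    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        # nova stores flavor/owner details in a metadata element,
        # like the dicts logged above
        xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS)
        print(dom.UUIDString(), dom.name(), len(xml))
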
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.470 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.471 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T12:05:01.471781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.479 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.487 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
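
network.outgoing.bytes.delta reports 0 for both instances above. A plausible reading, sketched with illustrative names rather than ceilometer's internals: a delta pollster diffs the current cumulative counter against the previous cycle's cached value, so an idle vNIC (or a first sample) yields 0:

    _previous = {}

    def bytes_delta(instance_id, nic, tx_bytes_now):
        key = (instance_id, nic)
        prev = _previous.get(key)
        _previous[key] = tx_bytes_now
        if prev is None:
            return 0                        # first cycle: nothing to diff
        return max(tx_bytes_now - prev, 0)  # guard against counter resets

    print(bytes_delta("55846fbf", "tap0", 1000))  # -> 0 (first sample)
    print(bytes_delta("55846fbf", "tap0", 1500))  # -> 500
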
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.488 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T12:05:01.489456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.513 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.514 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.514 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.537 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.537 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.538 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.539 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T12:05:01.539075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.601 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.602 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.602 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.667 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 41861120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.667 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.668 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.668 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
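
The skip line above suggests per-cycle discovery caching: each discovery method runs once per cycle, and a pollster whose shared discovery yields nothing to poll is skipped. A rough model of that flow (a guess at the behavior, not ceilometer's actual logic):

    def run_cycle(pollsters, discover):
        cache = {}   # discovery method -> resources, filled once per cycle
        for name, method in pollsters:
            if method not in cache:
                cache[method] = discover(method)
            if not cache[method]:
                print("Skip pollster %s, no new resources found this cycle"
                      % name)
                continue
            print("Polling %s on %d resource(s)" % (name, len(cache[method])))

    run_cycle([("network.outgoing.bytes.rate", "local_instances")],
              discover=lambda method: [])   # pretend discovery found nothing
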
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.669 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.670 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.670 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 600800165 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.670 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 7490744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.670 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T12:05:01.669371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T12:05:01.671637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.673 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.673 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T12:05:01.672906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.673 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.673 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.673 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T12:05:01.674882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T12:05:01.676055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.677 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.677 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.677 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.677 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.677 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 327509499 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.677 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 57556257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T12:05:01.677010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.latency volume: 50069079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T12:05:01.679015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.679 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.679 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T12:05:01.680921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T12:05:01.681889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.683 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T12:05:01.683038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.702 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.721 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.722 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T12:05:01.722595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.724 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T12:05:01.724032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.724 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 43550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T12:05:01.725194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.725 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/cpu volume: 34330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.726 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.727 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.728 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.728 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T12:05:01.726746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T12:05:01.728036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.729 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.730 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T12:05:01.729441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.730 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.730 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.730 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.731 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.732 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T12:05:01.731585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T12:05:01.732738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T12:05:01.734164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.734 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.735 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.735 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T12:05:01.736171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T12:05:01.737388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.737 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.738 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.738 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.738 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.738 14 DEBUG ceilometer.compute.pollsters [-] e9de5be9-383e-4139-a192-9a00ac9030d0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:05:01.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:05:02 compute-0 nova_compute[185173]: 2026-01-23 12:05:02.812 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:03 compute-0 nova_compute[185173]: 2026-01-23 12:05:03.646 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:07 compute-0 podman[246438]: 2026-01-23 12:05:07.740501075 +0000 UTC m=+0.072266285 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:05:07 compute-0 nova_compute[185173]: 2026-01-23 12:05:07.815 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:08 compute-0 nova_compute[185173]: 2026-01-23 12:05:08.648 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:12 compute-0 nova_compute[185173]: 2026-01-23 12:05:12.819 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:13 compute-0 nova_compute[185173]: 2026-01-23 12:05:13.649 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:14 compute-0 podman[246457]: 2026-01-23 12:05:14.753143147 +0000 UTC m=+0.072427139 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 12:05:14 compute-0 podman[246459]: 2026-01-23 12:05:14.773874303 +0000 UTC m=+0.074495989 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:05:14 compute-0 podman[246458]: 2026-01-23 12:05:14.781537891 +0000 UTC m=+0.089137297 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120)
Jan 23 12:05:17 compute-0 nova_compute[185173]: 2026-01-23 12:05:17.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:17 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 12:05:17 compute-0 nova_compute[185173]: 2026-01-23 12:05:17.820 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:18 compute-0 nova_compute[185173]: 2026-01-23 12:05:18.651 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:20 compute-0 podman[246518]: 2026-01-23 12:05:20.769987714 +0000 UTC m=+0.100090424 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 12:05:22 compute-0 nova_compute[185173]: 2026-01-23 12:05:22.823 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:23 compute-0 nova_compute[185173]: 2026-01-23 12:05:23.653 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:24 compute-0 nova_compute[185173]: 2026-01-23 12:05:24.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:24 compute-0 sshd-session[246541]: Invalid user ubuntu from 45.148.10.240 port 33822
Jan 23 12:05:24 compute-0 podman[246544]: 2026-01-23 12:05:24.962414981 +0000 UTC m=+0.103672571 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:05:24 compute-0 podman[246543]: 2026-01-23 12:05:24.964796689 +0000 UTC m=+0.112270801 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.expose-services=, vcs-type=git, name=ubi9)
Jan 23 12:05:25 compute-0 sshd-session[246541]: Connection closed by invalid user ubuntu 45.148.10.240 port 33822 [preauth]
Jan 23 12:05:25 compute-0 nova_compute[185173]: 2026-01-23 12:05:25.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.708 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.708 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.709 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.709 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.794 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.853 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.854 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.918 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.919 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.980 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:26 compute-0 nova_compute[185173]: 2026-01-23 12:05:26.981 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.040 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.047 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.111 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.113 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.194 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.196 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.254 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.255 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.318 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.644 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.646 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4888MB free_disk=72.37252426147461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.646 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.647 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.805 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.806 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.806 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.806 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.825 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.866 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.880 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.881 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:05:27 compute-0 nova_compute[185173]: 2026-01-23 12:05:27.881 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:05:28 compute-0 nova_compute[185173]: 2026-01-23 12:05:28.656 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:28 compute-0 nova_compute[185173]: 2026-01-23 12:05:28.881 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:28 compute-0 nova_compute[185173]: 2026-01-23 12:05:28.882 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:05:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:05:29.121 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:05:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:05:29.121 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:05:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:05:29.122 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:05:29 compute-0 podman[201022]: time="2026-01-23T12:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:05:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:05:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 23 12:05:30 compute-0 nova_compute[185173]: 2026-01-23 12:05:30.402 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:05:30 compute-0 nova_compute[185173]: 2026-01-23 12:05:30.402 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:05:30 compute-0 nova_compute[185173]: 2026-01-23 12:05:30.402 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:05:30 compute-0 podman[246607]: 2026-01-23 12:05:30.73785848 +0000 UTC m=+0.072587993 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:05:31 compute-0 openstack_network_exporter[204160]: ERROR   12:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:05:31 compute-0 openstack_network_exporter[204160]: ERROR   12:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:05:32 compute-0 nova_compute[185173]: 2026-01-23 12:05:32.827 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:33 compute-0 nova_compute[185173]: 2026-01-23 12:05:33.659 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.404 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updating instance_info_cache with network_info: [{"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.633 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.633 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.634 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.634 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.635 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.635 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:05:34 compute-0 nova_compute[185173]: 2026-01-23 12:05:34.635 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:05:37 compute-0 nova_compute[185173]: 2026-01-23 12:05:37.829 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:38 compute-0 nova_compute[185173]: 2026-01-23 12:05:38.662 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:38 compute-0 podman[246631]: 2026-01-23 12:05:38.800537761 +0000 UTC m=+0.115334501 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 23 12:05:42 compute-0 nova_compute[185173]: 2026-01-23 12:05:42.831 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:43 compute-0 nova_compute[185173]: 2026-01-23 12:05:43.664 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:45 compute-0 sshd-session[245264]: Received disconnect from 38.102.83.196 port 38810:11: disconnected by user
Jan 23 12:05:45 compute-0 sshd-session[245264]: Disconnected from user zuul 38.102.83.196 port 38810
Jan 23 12:05:45 compute-0 sshd-session[245238]: pam_unix(sshd:session): session closed for user zuul
Jan 23 12:05:45 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 23 12:05:45 compute-0 systemd[1]: session-31.scope: Consumed 3.801s CPU time.
Jan 23 12:05:45 compute-0 systemd-logind[798]: Session 31 logged out. Waiting for processes to exit.
Jan 23 12:05:45 compute-0 systemd-logind[798]: Removed session 31.
Jan 23 12:05:45 compute-0 podman[246651]: 2026-01-23 12:05:45.52259267 +0000 UTC m=+0.070122681 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 23 12:05:45 compute-0 podman[246650]: 2026-01-23 12:05:45.543737519 +0000 UTC m=+0.092225414 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 12:05:45 compute-0 podman[246652]: 2026-01-23 12:05:45.553740755 +0000 UTC m=+0.096584381 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 12:05:47 compute-0 nova_compute[185173]: 2026-01-23 12:05:47.835 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:48 compute-0 nova_compute[185173]: 2026-01-23 12:05:48.666 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:51 compute-0 podman[246711]: 2026-01-23 12:05:51.806862447 +0000 UTC m=+0.130035992 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 12:05:52 compute-0 nova_compute[185173]: 2026-01-23 12:05:52.836 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:53 compute-0 nova_compute[185173]: 2026-01-23 12:05:53.668 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:55 compute-0 podman[246736]: 2026-01-23 12:05:55.734332538 +0000 UTC m=+0.065356204 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, distribution-scope=public, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, com.redhat.component=ubi9-container)
Jan 23 12:05:55 compute-0 podman[246737]: 2026-01-23 12:05:55.744654291 +0000 UTC m=+0.070297065 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 12:05:57 compute-0 nova_compute[185173]: 2026-01-23 12:05:57.837 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:58 compute-0 nova_compute[185173]: 2026-01-23 12:05:58.671 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:05:59 compute-0 podman[201022]: time="2026-01-23T12:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:05:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:05:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
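The two GET lines above are the podman system service answering libpod REST calls over its unix socket (the same /run/podman/podman.sock that podman_exporter mounts, per its config_data). A sketch of issuing the containers/json query from Python; the socket path and URL come from the log, the UnixHTTPConnection helper is ad hoc:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket (ad-hoc helper).
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")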
Jan 23 12:06:01 compute-0 openstack_network_exporter[204160]: ERROR   12:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:06:01 compute-0 openstack_network_exporter[204160]: ERROR   12:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
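These two ERROR lines repeat on every scrape: the exporter calls OVS appctl targets that only exist for the userspace (dpif-netdev) datapath, and this host's Open vSwitch appears to use the kernel datapath, so ovs-vswitchd answers "please specify an existing datapath". A sketch of the same manual probe (assumes ovs-appctl on PATH and a kernel-datapath OVS, per the log):

    import subprocess

    # The same appctl targets the exporter polls; on a kernel-datapath host
    # both fail with "please specify an existing datapath", as logged above.
    for target in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        proc = subprocess.run(["ovs-appctl", target],
                              capture_output=True, text=True)
        print(target, "->", proc.returncode, proc.stderr.strip())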
Jan 23 12:06:01 compute-0 podman[246775]: 2026-01-23 12:06:01.731712947 +0000 UTC m=+0.065953379 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:06:02 compute-0 nova_compute[185173]: 2026-01-23 12:06:02.840 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:03 compute-0 nova_compute[185173]: 2026-01-23 12:06:03.673 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:07 compute-0 nova_compute[185173]: 2026-01-23 12:06:07.842 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:08 compute-0 nova_compute[185173]: 2026-01-23 12:06:08.675 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:09 compute-0 podman[246799]: 2026-01-23 12:06:09.729112561 +0000 UTC m=+0.059627215 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container)
Jan 23 12:06:12 compute-0 nova_compute[185173]: 2026-01-23 12:06:12.846 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:13 compute-0 nova_compute[185173]: 2026-01-23 12:06:13.677 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:15 compute-0 podman[246819]: 2026-01-23 12:06:15.728366458 +0000 UTC m=+0.059374688 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 23 12:06:15 compute-0 podman[246820]: 2026-01-23 12:06:15.750559413 +0000 UTC m=+0.076411126 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 12:06:15 compute-0 podman[246821]: 2026-01-23 12:06:15.775057484 +0000 UTC m=+0.085637173 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 12:06:17 compute-0 nova_compute[185173]: 2026-01-23 12:06:17.847 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:18 compute-0 nova_compute[185173]: 2026-01-23 12:06:18.678 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:19 compute-0 nova_compute[185173]: 2026-01-23 12:06:19.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:22 compute-0 podman[246881]: 2026-01-23 12:06:22.757036228 +0000 UTC m=+0.089477337 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 23 12:06:22 compute-0 nova_compute[185173]: 2026-01-23 12:06:22.849 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:23 compute-0 nova_compute[185173]: 2026-01-23 12:06:23.681 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:24 compute-0 nova_compute[185173]: 2026-01-23 12:06:24.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:25 compute-0 nova_compute[185173]: 2026-01-23 12:06:25.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
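The "Running periodic task ComputeManager.*" lines here and throughout this window come from oslo.service's periodic task machinery: decorated manager methods are collected and dispatched on a timer, and each dispatch is logged by run_periodic_tasks. A minimal sketch of that pattern using only the public oslo_service API (class and method names illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # body elided; each dispatch is logged as seen above

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # normally driven by a timer loop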
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.264 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.264 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.265 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
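The acquire/release triplet above is oslo.concurrency's synchronized decorator at work: its "inner" wrapper logs Acquiring/acquired/released with the wait and hold times around the decorated call. A sketch of the same pattern (lock name from the log; the function name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs while holding the "compute_resources" lock; lockutils emits
        # the DEBUG acquire/release lines seen above around this call.
        pass

    clean_compute_node_cache()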
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.265 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.334 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.389 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.390 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.457 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.458 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.518 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.519 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.576 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.582 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.643 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.644 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.710 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.710 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 podman[246920]: 2026-01-23 12:06:26.735741407 +0000 UTC m=+0.067826045 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.buildah.version=1.29.0, architecture=x86_64)
Jan 23 12:06:26 compute-0 podman[246922]: 2026-01-23 12:06:26.767226309 +0000 UTC m=+0.095508614 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.770 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.772 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:06:26 compute-0 nova_compute[185173]: 2026-01-23 12:06:26.831 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
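Each instance disk audited by update_available_resource is probed with qemu-img info, wrapped in oslo.concurrency's prlimit helper so the child process is capped at 1 GiB of address space and 30 s of CPU time, with --force-share so the probe does not take the image lock held by the running guest. Reproducing one probe directly, as a sketch (instance path reduced to a placeholder):

    import json
    import subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])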
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.128 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.129 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4897MB free_disk=72.37252426147461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.130 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.130 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.200 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.201 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance e9de5be9-383e-4139-a192-9a00ac9030d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.201 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.201 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.253 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.266 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
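The inventory dict above is what the resource tracker hands to placement; usable capacity per resource class is (total - reserved) * allocation_ratio. Checked against the logged figures:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

That headroom (32 effective vCPUs against 2 allocated, 7167 MB against 1536 MB used) is consistent with the "Final resource view" line above.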
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.267 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.268 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:27 compute-0 nova_compute[185173]: 2026-01-23 12:06:27.851 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:28 compute-0 nova_compute[185173]: 2026-01-23 12:06:28.268 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:28 compute-0 nova_compute[185173]: 2026-01-23 12:06:28.269 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:06:28 compute-0 nova_compute[185173]: 2026-01-23 12:06:28.269 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:06:28 compute-0 nova_compute[185173]: 2026-01-23 12:06:28.683 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:29.122 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:29.122 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:29.123 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:29 compute-0 nova_compute[185173]: 2026-01-23 12:06:29.425 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:06:29 compute-0 nova_compute[185173]: 2026-01-23 12:06:29.426 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:06:29 compute-0 nova_compute[185173]: 2026-01-23 12:06:29.426 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:06:29 compute-0 nova_compute[185173]: 2026-01-23 12:06:29.426 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:06:29 compute-0 podman[201022]: time="2026-01-23T12:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:06:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:06:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 23 12:06:31 compute-0 openstack_network_exporter[204160]: ERROR   12:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:06:31 compute-0 openstack_network_exporter[204160]: ERROR   12:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.648 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [{"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
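The payload logged above is nova's network_info cache for the instance, serialized as JSON: a list of ports, each carrying nested subnets, fixed IPs, and any attached floating IPs. A sketch of pulling the addresses back out, assuming the logged list has been captured into network_info_json:

    import json

    network_info = json.loads(network_info_json)  # the list logged above
    for port in network_info:
        for subnet in port["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])          # 192.168.0.65
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])  # 192.168.122.190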
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.672 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-55846fbf-a87a-4cba-be0b-23125d3d9ef4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.673 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.674 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.675 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.676 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.677 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.678 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:06:32 compute-0 podman[246964]: 2026-01-23 12:06:32.764436398 +0000 UTC m=+0.083264954 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 12:06:32 compute-0 nova_compute[185173]: 2026-01-23 12:06:32.853 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:33 compute-0 nova_compute[185173]: 2026-01-23 12:06:33.685 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:36 compute-0 nova_compute[185173]: 2026-01-23 12:06:36.641 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:06:37 compute-0 nova_compute[185173]: 2026-01-23 12:06:37.857 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:38 compute-0 nova_compute[185173]: 2026-01-23 12:06:38.687 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:40 compute-0 podman[246988]: 2026-01-23 12:06:40.768583685 +0000 UTC m=+0.093857394 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 12:06:42 compute-0 nova_compute[185173]: 2026-01-23 12:06:42.859 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:43 compute-0 nova_compute[185173]: 2026-01-23 12:06:43.689 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.147 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.148 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.148 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.148 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.149 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.150 185177 INFO nova.compute.manager [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Terminating instance
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.151 185177 DEBUG nova.compute.manager [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 12:06:45 compute-0 kernel: tape0cab06b-81 (unregistering): left promiscuous mode
Jan 23 12:06:45 compute-0 NetworkManager[56133]: <info>  [1769170005.1893] device (tape0cab06b-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 12:06:45 compute-0 ovn_controller[97581]: 2026-01-23T12:06:45Z|00058|binding|INFO|Releasing lport e0cab06b-811c-4fd7-a9ec-dded37a5bfcf from this chassis (sb_readonly=0)
Jan 23 12:06:45 compute-0 ovn_controller[97581]: 2026-01-23T12:06:45Z|00059|binding|INFO|Setting lport e0cab06b-811c-4fd7-a9ec-dded37a5bfcf down in Southbound
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.197 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 ovn_controller[97581]: 2026-01-23T12:06:45Z|00060|binding|INFO|Removing iface tape0cab06b-81 ovn-installed in OVS
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.200 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.210 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:4d:2b 192.168.0.35'], port_security=['fa:16:3e:c3:4d:2b 192.168.0.35'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wvvtbi4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-port-2konbamiqogw', 'neutron:cidrs': '192.168.0.35/24', 'neutron:device_id': 'e9de5be9-383e-4139-a192-9a00ac9030d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wvvtbi4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-port-2konbamiqogw', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.210', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.210 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.211 106832 INFO neutron.agent.ovn.metadata.agent [-] Port e0cab06b-811c-4fd7-a9ec-dded37a5bfcf in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 unbound from our chassis
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.213 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d2c33ef-0f52-43b5-80dd-899657aece53
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.233 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0d12b1-3554-4c68-98dc-a6589ba8250b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:06:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 23 12:06:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 36.976s CPU time.
Jan 23 12:06:45 compute-0 systemd-machined[156550]: Machine qemu-4-instance-00000004 terminated.
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.260 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[5e197cb3-3f98-444a-a079-dca80215ad7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.263 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7c28aa-e0e2-43b5-8e7b-701286642433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.287 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[f4802987-f38b-4a2a-bd0a-f7a1185c64c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.303 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a0937b-f807-4c57-9e1d-84d76450a349]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d2c33ef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:a6:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 658, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 658, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374776, 'reachable_time': 32160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247021, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.316 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[2fede29e-dc32-4bf7-9a90-89d6d3fd025e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374787, 'tstamp': 374787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247022, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap9d2c33ef-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374789, 'tstamp': 374789}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247022, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.318 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.320 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.327 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d2c33ef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.327 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.328 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d2c33ef-00, col_values=(('external_ids', {'iface-id': 'a3c84d66-2ae2-461a-92f2-b9999c7b469e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.328 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.328 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.374 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.379 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.434 185177 INFO nova.virt.libvirt.driver [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance destroyed successfully.
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.434 185177 DEBUG nova.objects.instance [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'resources' on Instance uuid e9de5be9-383e-4139-a192-9a00ac9030d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.447 185177 DEBUG nova.virt.libvirt.vif [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-i4gqh4k-b64ilmmiw3co-dxxhdi3z36fs-vnf-e3wngllyc55g',id=4,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:56:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='500baa09-1e39-474e-b275-8b2dffe3a65b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-ebfyk188',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:56:32Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODkxOTQyMTU0NTc1Nzc4Mzk0NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 23 12:06:45 compute-0 nova_compute[185173]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODkxOTQyMTU0NTc1Nzc4Mzk0NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg5MTk0MjE1NDU3NTc3ODM5NDU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04OTE5NDIxNTQ1NzU3NzgzOTQ1PT0tLQo=',user_id='d9858533c2284846a8f0f19a1fb45045',uuid=e9de5be9-383e-4139-a192-9a00ac9030d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.447 185177 DEBUG nova.network.os_vif_util [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "address": "fa:16:3e:c3:4d:2b", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0cab06b-81", "ovs_interfaceid": "e0cab06b-811c-4fd7-a9ec-dded37a5bfcf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.448 185177 DEBUG nova.network.os_vif_util [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.448 185177 DEBUG os_vif [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.449 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.449 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0cab06b-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.451 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.453 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.456 185177 INFO os_vif [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:4d:2b,bridge_name='br-int',has_traffic_filtering=True,id=e0cab06b-811c-4fd7-a9ec-dded37a5bfcf,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape0cab06b-81')
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.456 185177 INFO nova.virt.libvirt.driver [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Deleting instance files /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0_del
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.457 185177 INFO nova.virt.libvirt.driver [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Deletion of /var/lib/nova/instances/e9de5be9-383e-4139-a192-9a00ac9030d0_del complete
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.496 185177 DEBUG nova.compute.manager [req-6e6e49a2-30ba-4587-8c81-c30d98b3f1e3 req-ce2e3a10-82e2-4057-b771-1982b8d3b05a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-vif-unplugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.497 185177 DEBUG oslo_concurrency.lockutils [req-6e6e49a2-30ba-4587-8c81-c30d98b3f1e3 req-ce2e3a10-82e2-4057-b771-1982b8d3b05a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.497 185177 DEBUG oslo_concurrency.lockutils [req-6e6e49a2-30ba-4587-8c81-c30d98b3f1e3 req-ce2e3a10-82e2-4057-b771-1982b8d3b05a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.497 185177 DEBUG oslo_concurrency.lockutils [req-6e6e49a2-30ba-4587-8c81-c30d98b3f1e3 req-ce2e3a10-82e2-4057-b771-1982b8d3b05a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.497 185177 DEBUG nova.compute.manager [req-6e6e49a2-30ba-4587-8c81-c30d98b3f1e3 req-ce2e3a10-82e2-4057-b771-1982b8d3b05a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] No waiting events found dispatching network-vif-unplugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.498 185177 DEBUG nova.compute.manager [req-6e6e49a2-30ba-4587-8c81-c30d98b3f1e3 req-ce2e3a10-82e2-4057-b771-1982b8d3b05a e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-vif-unplugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.525 185177 INFO nova.compute.manager [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.526 185177 DEBUG oslo.service.loopingcall [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.526 185177 DEBUG nova.compute.manager [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.526 185177 DEBUG nova.network.neutron [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.574 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:06:45 compute-0 nova_compute[185173]: 2026-01-23 12:06:45.574 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:45 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:45.575 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 12:06:45 compute-0 rsyslogd[235472]: message too long (8192) with configured size 8096, begin of message is: 2026-01-23 12:06:45.447 185177 DEBUG nova.virt.libvirt.vif [None req-560af839-1d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:06:46 compute-0 podman[247044]: 2026-01-23 12:06:46.737958259 +0000 UTC m=+0.064128684 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:06:46 compute-0 podman[247045]: 2026-01-23 12:06:46.766153701 +0000 UTC m=+0.085197071 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2)
Jan 23 12:06:46 compute-0 podman[247046]: 2026-01-23 12:06:46.778443163 +0000 UTC m=+0.094793787 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.164 185177 DEBUG nova.network.neutron [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.181 185177 INFO nova.compute.manager [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Took 1.65 seconds to deallocate network for instance.
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.214 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.215 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.297 185177 DEBUG nova.compute.provider_tree [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.313 185177 DEBUG nova.scheduler.client.report [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.339 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.369 185177 INFO nova.scheduler.client.report [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Deleted allocations for instance e9de5be9-383e-4139-a192-9a00ac9030d0
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.443 185177 DEBUG oslo_concurrency.lockutils [None req-560af839-1d9e-43ed-8865-3ce8bcbab89b d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.586 185177 DEBUG nova.compute.manager [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.586 185177 DEBUG oslo_concurrency.lockutils [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.587 185177 DEBUG oslo_concurrency.lockutils [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.587 185177 DEBUG oslo_concurrency.lockutils [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "e9de5be9-383e-4139-a192-9a00ac9030d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.588 185177 DEBUG nova.compute.manager [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] No waiting events found dispatching network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.588 185177 WARNING nova.compute.manager [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received unexpected event network-vif-plugged-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf for instance with vm_state deleted and task_state None.
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.589 185177 DEBUG nova.compute.manager [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Received event network-changed-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.589 185177 DEBUG nova.compute.manager [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Refreshing instance network info cache due to event network-changed-e0cab06b-811c-4fd7-a9ec-dded37a5bfcf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.590 185177 DEBUG oslo_concurrency.lockutils [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.590 185177 DEBUG oslo_concurrency.lockutils [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.591 185177 DEBUG nova.network.neutron [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Refreshing network info cache for port e0cab06b-811c-4fd7-a9ec-dded37a5bfcf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.701 185177 DEBUG nova.network.neutron [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 12:06:47 compute-0 nova_compute[185173]: 2026-01-23 12:06:47.862 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:48 compute-0 nova_compute[185173]: 2026-01-23 12:06:48.278 185177 DEBUG nova.network.neutron [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 23 12:06:48 compute-0 nova_compute[185173]: 2026-01-23 12:06:48.278 185177 DEBUG oslo_concurrency.lockutils [req-b3d81b51-b8e2-4762-8aef-76d7f77f8a92 req-b1dc957c-8f24-475e-b0d6-87e51a075da7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-e9de5be9-383e-4139-a192-9a00ac9030d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:06:50 compute-0 nova_compute[185173]: 2026-01-23 12:06:50.452 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:52 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:06:52.578 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:06:52 compute-0 nova_compute[185173]: 2026-01-23 12:06:52.865 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:53 compute-0 podman[247104]: 2026-01-23 12:06:53.764962772 +0000 UTC m=+0.101109461 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:06:55 compute-0 nova_compute[185173]: 2026-01-23 12:06:55.455 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:57 compute-0 podman[247131]: 2026-01-23 12:06:57.748674534 +0000 UTC m=+0.072731425 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=kepler, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 23 12:06:57 compute-0 podman[247132]: 2026-01-23 12:06:57.766418249 +0000 UTC m=+0.086654127 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:06:57 compute-0 nova_compute[185173]: 2026-01-23 12:06:57.867 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:06:59 compute-0 podman[201022]: time="2026-01-23T12:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:06:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:06:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 23 12:07:00 compute-0 nova_compute[185173]: 2026-01-23 12:07:00.432 185177 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769170005.4306297, e9de5be9-383e-4139-a192-9a00ac9030d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:07:00 compute-0 nova_compute[185173]: 2026-01-23 12:07:00.432 185177 INFO nova.compute.manager [-] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] VM Stopped (Lifecycle Event)
Jan 23 12:07:00 compute-0 nova_compute[185173]: 2026-01-23 12:07:00.451 185177 DEBUG nova.compute.manager [None req-9be77eb6-e464-436e-b5d9-ac1a0651ee04 - - - - - -] [instance: e9de5be9-383e-4139-a192-9a00ac9030d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:07:00 compute-0 nova_compute[185173]: 2026-01-23 12:07:00.457 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:01 compute-0 openstack_network_exporter[204160]: ERROR   12:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:07:01 compute-0 openstack_network_exporter[204160]: ERROR   12:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.457 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.457 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.457 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28433716d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.464 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'name': 'test_0', 'flavor': {'id': 'f2c5c5dd-a580-4885-a3ab-a766eac401c8', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'c5833e41-b4db-454e-8f49-014aa18c7dc5'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'user_id': 'd9858533c2284846a8f0f19a1fb45045', 'hostId': '47f89b8956aaa9163f724166aabd4216eadbb2bd951d24f4c87e1ecb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.464 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.465 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T12:07:01.465007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.471 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 rsyslogd[235472]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.472 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T12:07:01.472902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 rsyslogd[235472]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.496 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.496 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.496 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T12:07:01.497569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.552 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.553 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.553 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.554 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.555 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 1669208630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T12:07:01.554849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.555 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 8106790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.555 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.556 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.556 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.557 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T12:07:01.556869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.558 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.558 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.558 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.559 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.559 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T12:07:01.558602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T12:07:01.560159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T12:07:01.561158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.562 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 639933059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.562 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 72530295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.562 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.latency volume: 43879093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T12:07:01.561962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.563 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.564 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T12:07:01.563665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.565 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T12:07:01.565179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.566 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T12:07:01.566407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.567 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T12:07:01.567589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.585 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T12:07:01.586576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes volume: 2388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T12:07:01.587938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/cpu volume: 44710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T12:07:01.589068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T12:07:01.590075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T12:07:01.591016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T12:07:01.591878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T12:07:01.593188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T12:07:01.594114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T12:07:01.595087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.596 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T12:07:01.596624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.597 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.598 14 DEBUG ceilometer.compute.pollsters [-] 55846fbf-a87a-4cba-be0b-23125d3d9ef4/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T12:07:01.597796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:07:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:07:01.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
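The ceilometer lines above repeat one fixed per-pollster sequence: run discovery with the [local_instances] method, check whether the pollster belongs to a source that requires coordination (here the hashrings are [None], so no coordination is needed), emit zero or more samples through _stats_to_sample, record a heartbeat, and log "Finished polling". The following is a minimal, self-contained Python sketch of that control flow only; every name in it (FakePollster, run_cycle) is an illustrative stand-in, not ceilometer's actual API, and the sample values are reused from the log above.

    import datetime

    class FakePollster:
        # Illustrative stand-in for a ceilometer pollster extension.
        def __init__(self, name, volumes):
            self.name = name
            self.volumes = volumes

        def get_samples(self, resources):
            # One sample per (resource, volume) pair, mirroring the repeated
            # "<uuid>/<meter> volume: <n> _stats_to_sample" DEBUG lines.
            for res in resources:
                for vol in self.volumes:
                    yield res, vol

    def run_cycle(pollsters, resources):
        for p in pollsters:
            print(f"Polling pollster {p.name}")
            # Coordination check elided: with no coordination group name,
            # the cycle proceeds without hashring membership (as above).
            for res, vol in p.get_samples(resources):
                print(f"{res}/{p.name} volume: {vol}")
            ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"Updated heartbeat for {p.name} ({ts})")
            print(f"Finished polling pollster {p.name}")

    run_cycle(
        [FakePollster("disk.device.read.requests", [840, 173, 109]),
         FakePollster("power.state", [1])],
        ["55846fbf-a87a-4cba-be0b-23125d3d9ef4"],
    )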
Jan 23 12:07:02 compute-0 nova_compute[185173]: 2026-01-23 12:07:02.868 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:03 compute-0 podman[247176]: 2026-01-23 12:07:03.765931881 +0000 UTC m=+0.094145731 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.785 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.785 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.785 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.786 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.786 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.787 185177 INFO nova.compute.manager [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Terminating instance
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.788 185177 DEBUG nova.compute.manager [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 12:07:03 compute-0 kernel: tap4c18896b-ec (unregistering): left promiscuous mode
Jan 23 12:07:03 compute-0 NetworkManager[56133]: <info>  [1769170023.8285] device (tap4c18896b-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 12:07:03 compute-0 ovn_controller[97581]: 2026-01-23T12:07:03Z|00061|binding|INFO|Releasing lport 4c18896b-ecf0-4d1b-b901-f24edce45c11 from this chassis (sb_readonly=0)
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.837 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:03 compute-0 ovn_controller[97581]: 2026-01-23T12:07:03Z|00062|binding|INFO|Setting lport 4c18896b-ecf0-4d1b-b901-f24edce45c11 down in Southbound
Jan 23 12:07:03 compute-0 ovn_controller[97581]: 2026-01-23T12:07:03Z|00063|binding|INFO|Removing iface tap4c18896b-ec ovn-installed in OVS
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.839 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:03 compute-0 nova_compute[185173]: 2026-01-23 12:07:03.857 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:03 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 23 12:07:03 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 48.065s CPU time.
Jan 23 12:07:03 compute-0 systemd-machined[156550]: Machine qemu-1-instance-00000001 terminated.
Jan 23 12:07:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:03.994 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:21:a1 192.168.0.65'], port_security=['fa:16:3e:e4:21:a1 192.168.0.65'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.65/24', 'neutron:device_id': '55846fbf-a87a-4cba-be0b-23125d3d9ef4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d2c33ef-0f52-43b5-80dd-899657aece53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd16a0de2f5e4a8480a855ef0e1a3f14', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fa655b-b17a-4411-ab93-c6585edc77dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=488b21ee-cabd-4ebf-9089-c8262ea2e5e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=4c18896b-ecf0-4d1b-b901-f24edce45c11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:07:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:03.995 106832 INFO neutron.agent.ovn.metadata.agent [-] Port 4c18896b-ecf0-4d1b-b901-f24edce45c11 in datapath 9d2c33ef-0f52-43b5-80dd-899657aece53 unbound from our chassis
Jan 23 12:07:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:03.996 106832 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d2c33ef-0f52-43b5-80dd-899657aece53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 12:07:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:03.997 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[73f82502-64fb-402d-9933-4bf1f8d721cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:03.998 106832 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53 namespace which is not needed anymore
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.008 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.013 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.070 185177 INFO nova.virt.libvirt.driver [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Instance destroyed successfully.
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.071 185177 DEBUG nova.objects.instance [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lazy-loading 'resources' on Instance uuid 55846fbf-a87a-4cba-be0b-23125d3d9ef4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.085 185177 DEBUG nova.virt.libvirt.vif [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:46:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:47:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd16a0de2f5e4a8480a855ef0e1a3f14',ramdisk_id='',reservation_id='r-wixocgu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c5833e41-b4db-454e-8f49-014aa18c7dc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:47:04Z,user_data=None,user_id='d9858533c2284846a8f0f19a1fb45045',uuid=55846fbf-a87a-4cba-be0b-23125d3d9ef4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.086 185177 DEBUG nova.network.os_vif_util [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converting VIF {"id": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "address": "fa:16:3e:e4:21:a1", "network": {"id": "9d2c33ef-0f52-43b5-80dd-899657aece53", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.65", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bd16a0de2f5e4a8480a855ef0e1a3f14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c18896b-ec", "ovs_interfaceid": "4c18896b-ecf0-4d1b-b901-f24edce45c11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.086 185177 DEBUG nova.network.os_vif_util [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.087 185177 DEBUG os_vif [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.088 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.089 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c18896b-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.090 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.092 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.095 185177 INFO os_vif [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:21:a1,bridge_name='br-int',has_traffic_filtering=True,id=4c18896b-ecf0-4d1b-b901-f24edce45c11,network=Network(9d2c33ef-0f52-43b5-80dd-899657aece53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c18896b-ec')
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.095 185177 INFO nova.virt.libvirt.driver [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Deleting instance files /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4_del
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.096 185177 INFO nova.virt.libvirt.driver [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Deletion of /var/lib/nova/instances/55846fbf-a87a-4cba-be0b-23125d3d9ef4_del complete
Jan 23 12:07:04 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [NOTICE]   (238382) : haproxy version is 2.8.14-c23fe91
Jan 23 12:07:04 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [NOTICE]   (238382) : path to executable is /usr/sbin/haproxy
Jan 23 12:07:04 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [WARNING]  (238382) : Exiting Master process...
Jan 23 12:07:04 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [WARNING]  (238382) : Exiting Master process...
Jan 23 12:07:04 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [ALERT]    (238382) : Current worker (238384) exited with code 143 (Terminated)
Jan 23 12:07:04 compute-0 neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53[238378]: [WARNING]  (238382) : All workers exited. Exiting... (0)
Jan 23 12:07:04 compute-0 systemd[1]: libpod-f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e.scope: Deactivated successfully.
Jan 23 12:07:04 compute-0 podman[247244]: 2026-01-23 12:07:04.159937178 +0000 UTC m=+0.052680954 container died f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.203 185177 INFO nova.compute.manager [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Took 0.42 seconds to destroy the instance on the hypervisor.
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.204 185177 DEBUG oslo.service.loopingcall [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.204 185177 DEBUG nova.compute.manager [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.204 185177 DEBUG nova.network.neutron [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 12:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e-userdata-shm.mount: Deactivated successfully.
Jan 23 12:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-77f4907b936c862ad4285c3a038757626f8e168a451c7171f9c4b7d30cd8c181-merged.mount: Deactivated successfully.
Jan 23 12:07:04 compute-0 podman[247244]: 2026-01-23 12:07:04.448351445 +0000 UTC m=+0.341095201 container cleanup f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:07:04 compute-0 systemd[1]: libpod-conmon-f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e.scope: Deactivated successfully.
Jan 23 12:07:04 compute-0 podman[247272]: 2026-01-23 12:07:04.637168268 +0000 UTC m=+0.165187225 container remove f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.653 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[0658792e-f1f3-465e-aaa1-bfa995c234bc]: (4, ('Fri Jan 23 12:07:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53 (f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e)\nf45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e\nFri Jan 23 12:07:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53 (f45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e)\nf45044ab43e35afc597abcab4e8e0b9c6c8e2f5afeb70e3fe035697a43e34f1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.655 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[a675b4f5-c843-46ee-b366-df65b6fbe927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.656 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d2c33ef-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.658 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 kernel: tap9d2c33ef-00: left promiscuous mode
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.666 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.668 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[6a14a0c8-fad0-4b4e-aa44-a8631e5e9e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:04 compute-0 nova_compute[185173]: 2026-01-23 12:07:04.675 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.682 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[2272bd22-b3e3-4a46-8f60-e508c09538e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.683 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[709f8ab6-ff2d-4bed-a526-37a740d975a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.696 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[95a173ce-6ef2-47ff-83f3-a02db8401b24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374764, 'reachable_time': 21128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247285, 'error': None, 'target': 'ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d2c33ef\x2d0f52\x2d43b5\x2d80dd\x2d899657aece53.mount: Deactivated successfully.
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.710 107372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d2c33ef-0f52-43b5-80dd-899657aece53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 12:07:04 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:04.711 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4a5ce7-0398-4717-9b81-ff255cf97302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:07:05 compute-0 nova_compute[185173]: 2026-01-23 12:07:05.771 185177 DEBUG nova.compute.manager [req-b5ada762-3766-465f-9758-35da25677ff5 req-9529bd40-4775-44b3-8206-0b9ebd888a68 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-vif-unplugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:07:05 compute-0 nova_compute[185173]: 2026-01-23 12:07:05.771 185177 DEBUG oslo_concurrency.lockutils [req-b5ada762-3766-465f-9758-35da25677ff5 req-9529bd40-4775-44b3-8206-0b9ebd888a68 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:05 compute-0 nova_compute[185173]: 2026-01-23 12:07:05.772 185177 DEBUG oslo_concurrency.lockutils [req-b5ada762-3766-465f-9758-35da25677ff5 req-9529bd40-4775-44b3-8206-0b9ebd888a68 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:05 compute-0 nova_compute[185173]: 2026-01-23 12:07:05.772 185177 DEBUG oslo_concurrency.lockutils [req-b5ada762-3766-465f-9758-35da25677ff5 req-9529bd40-4775-44b3-8206-0b9ebd888a68 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:05 compute-0 nova_compute[185173]: 2026-01-23 12:07:05.772 185177 DEBUG nova.compute.manager [req-b5ada762-3766-465f-9758-35da25677ff5 req-9529bd40-4775-44b3-8206-0b9ebd888a68 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] No waiting events found dispatching network-vif-unplugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:07:05 compute-0 nova_compute[185173]: 2026-01-23 12:07:05.773 185177 DEBUG nova.compute.manager [req-b5ada762-3766-465f-9758-35da25677ff5 req-9529bd40-4775-44b3-8206-0b9ebd888a68 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-vif-unplugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.374 185177 DEBUG nova.network.neutron [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.393 185177 INFO nova.compute.manager [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Took 3.19 seconds to deallocate network for instance.
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.450 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.451 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.522 185177 DEBUG nova.compute.manager [req-1358a0da-c606-4f79-9c1b-0f97b41c8cf2 req-3eb4d596-e8c7-4a54-ab23-f974ee58e6e6 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-vif-deleted-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.533 185177 DEBUG nova.compute.provider_tree [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.553 185177 DEBUG nova.scheduler.client.report [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.589 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.619 185177 INFO nova.scheduler.client.report [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Deleted allocations for instance 55846fbf-a87a-4cba-be0b-23125d3d9ef4
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.698 185177 DEBUG oslo_concurrency.lockutils [None req-8c68b9ce-53b6-49fe-9737-93237ec632ca d9858533c2284846a8f0f19a1fb45045 bd16a0de2f5e4a8480a855ef0e1a3f14 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.849 185177 DEBUG nova.compute.manager [req-dc87005a-a5b7-4c4e-b4ce-c7737aa5b1d4 req-14c8e642-b046-4781-b630-66eb99070e1b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.850 185177 DEBUG oslo_concurrency.lockutils [req-dc87005a-a5b7-4c4e-b4ce-c7737aa5b1d4 req-14c8e642-b046-4781-b630-66eb99070e1b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.850 185177 DEBUG oslo_concurrency.lockutils [req-dc87005a-a5b7-4c4e-b4ce-c7737aa5b1d4 req-14c8e642-b046-4781-b630-66eb99070e1b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.851 185177 DEBUG oslo_concurrency.lockutils [req-dc87005a-a5b7-4c4e-b4ce-c7737aa5b1d4 req-14c8e642-b046-4781-b630-66eb99070e1b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "55846fbf-a87a-4cba-be0b-23125d3d9ef4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.851 185177 DEBUG nova.compute.manager [req-dc87005a-a5b7-4c4e-b4ce-c7737aa5b1d4 req-14c8e642-b046-4781-b630-66eb99070e1b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] No waiting events found dispatching network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.852 185177 WARNING nova.compute.manager [req-dc87005a-a5b7-4c4e-b4ce-c7737aa5b1d4 req-14c8e642-b046-4781-b630-66eb99070e1b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Received unexpected event network-vif-plugged-4c18896b-ecf0-4d1b-b901-f24edce45c11 for instance with vm_state deleted and task_state None.
Jan 23 12:07:07 compute-0 nova_compute[185173]: 2026-01-23 12:07:07.870 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:09 compute-0 nova_compute[185173]: 2026-01-23 12:07:09.092 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:11 compute-0 podman[247288]: 2026-01-23 12:07:11.740589087 +0000 UTC m=+0.071728951 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 23 12:07:12 compute-0 nova_compute[185173]: 2026-01-23 12:07:12.872 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:14 compute-0 nova_compute[185173]: 2026-01-23 12:07:14.095 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:17 compute-0 podman[247309]: 2026-01-23 12:07:17.733222072 +0000 UTC m=+0.068453751 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:07:17 compute-0 podman[247311]: 2026-01-23 12:07:17.756010191 +0000 UTC m=+0.070189933 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 12:07:17 compute-0 podman[247310]: 2026-01-23 12:07:17.757823976 +0000 UTC m=+0.085019528 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 12:07:17 compute-0 nova_compute[185173]: 2026-01-23 12:07:17.874 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:19 compute-0 nova_compute[185173]: 2026-01-23 12:07:19.069 185177 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769170024.06837, 55846fbf-a87a-4cba-be0b-23125d3d9ef4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:07:19 compute-0 nova_compute[185173]: 2026-01-23 12:07:19.069 185177 INFO nova.compute.manager [-] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] VM Stopped (Lifecycle Event)
Jan 23 12:07:19 compute-0 nova_compute[185173]: 2026-01-23 12:07:19.098 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:19 compute-0 nova_compute[185173]: 2026-01-23 12:07:19.255 185177 DEBUG nova.compute.manager [None req-fe0de525-342b-499d-b7a9-2e50cd8977d2 - - - - - -] [instance: 55846fbf-a87a-4cba-be0b-23125d3d9ef4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:07:21 compute-0 nova_compute[185173]: 2026-01-23 12:07:21.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:22 compute-0 nova_compute[185173]: 2026-01-23 12:07:22.877 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:24 compute-0 nova_compute[185173]: 2026-01-23 12:07:24.100 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:24 compute-0 podman[247368]: 2026-01-23 12:07:24.76691443 +0000 UTC m=+0.096139100 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:07:25 compute-0 nova_compute[185173]: 2026-01-23 12:07:25.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:25 compute-0 nova_compute[185173]: 2026-01-23 12:07:25.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.272 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.273 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.273 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.273 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.575 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.576 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5389MB free_disk=72.41660690307617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.577 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.577 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.643 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.644 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.663 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.682 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.682 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.698 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.723 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.751 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.765 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
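[annotation] The inventory dict repeated in these lines is what bounds scheduling on this host. Per the documented Placement capacity formula, usable capacity per resource class is (total - reserved) * allocation_ratio; a minimal sketch reproducing the numbers from the inventory data above:

    # Values copied from the inventory data logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2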
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.789 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:07:26 compute-0 nova_compute[185173]: 2026-01-23 12:07:26.790 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
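[annotation] The Acquiring/acquired/released triplets, with their waited/held timings, are emitted by the inner wrapper in oslo_concurrency/lockutils.py cited at the end of each line. A minimal sketch of the usage pattern that produces them (assumed shape for illustration; Nova routes this through its own helpers, not shown here):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # Body runs with the in-process "compute_resources" lock held; the
        # decorator's wrapper logs the waited/held durations seen above.
        pass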
Jan 23 12:07:27 compute-0 nova_compute[185173]: 2026-01-23 12:07:27.879 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:28 compute-0 podman[247397]: 2026-01-23 12:07:28.730783298 +0000 UTC m=+0.062903794 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:07:28 compute-0 podman[247396]: 2026-01-23 12:07:28.73614414 +0000 UTC m=+0.072047999 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.expose-services=, version=9.4, distribution-scope=public, name=ubi9)
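[annotation] podman logs one health_status event per healthcheck run for each supervised container, embedding the container's full config_data. The same health state can be read back on demand; a minimal sketch (assumes the podman CLI is on PATH and the named container has a healthcheck configured):

    import json
    import subprocess

    def health_status(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["Status"]  # e.g. "healthy"

    print(health_status("ceilometer_agent_ipmi"))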
Jan 23 12:07:28 compute-0 nova_compute[185173]: 2026-01-23 12:07:28.790 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:28 compute-0 nova_compute[185173]: 2026-01-23 12:07:28.790 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:07:29 compute-0 nova_compute[185173]: 2026-01-23 12:07:29.103 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:29.123 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:07:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:29.124 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:07:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:07:29.124 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:07:29 compute-0 nova_compute[185173]: 2026-01-23 12:07:29.219 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 12:07:29 compute-0 nova_compute[185173]: 2026-01-23 12:07:29.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:29 compute-0 nova_compute[185173]: 2026-01-23 12:07:29.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
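[annotation] The "Running periodic task" lines come from oslo.service iterating the ComputeManager's decorated methods; _reclaim_queued_deletes then returns early because reclaim_instance_interval is unset. A minimal sketch of that pattern (illustrative, not the Nova source; the option is a real Nova setting but is registered here only to keep the sketch self-contained):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # the real task logs "skipping..." as seen above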
Jan 23 12:07:29 compute-0 podman[201022]: time="2026-01-23T12:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:07:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:07:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3907 "" "Go-http-client/1.1"
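[annotation] These GETs are the libpod REST API's access log (podman[201022] is the API service); the podman_exporter polls it over the mounted podman.sock every 30 seconds. A minimal sketch of the same query (assumes the third-party requests-unixsocket package and podman's default root socket path):

    import requests_unixsocket

    session = requests_unixsocket.Session()
    resp = session.get(
        "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
        "/v4.9.3/libpod/containers/json?all=true"
    )
    for c in resp.json():
        print(c["Names"][0], c["State"])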
Jan 23 12:07:30 compute-0 nova_compute[185173]: 2026-01-23 12:07:30.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:30 compute-0 nova_compute[185173]: 2026-01-23 12:07:30.238 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:31 compute-0 openstack_network_exporter[204160]: ERROR   12:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:07:31 compute-0 openstack_network_exporter[204160]: ERROR   12:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
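[annotation] This error pair recurs on each exporter poll: the dpif-netdev/* appctl commands only apply to the userspace (PMD) datapath, and this host runs none, so Open vSwitch answers "please specify an existing datapath". A minimal reproduction sketch (assumes ovs-appctl is installed and, as here, no netdev datapath exists):

    import subprocess

    result = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True,
    )
    print(result.returncode, (result.stderr or result.stdout).strip())
    # expected: non-zero exit and "please specify an existing datapath"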
Jan 23 12:07:32 compute-0 nova_compute[185173]: 2026-01-23 12:07:32.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:07:32 compute-0 nova_compute[185173]: 2026-01-23 12:07:32.883 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:34 compute-0 nova_compute[185173]: 2026-01-23 12:07:34.105 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:34 compute-0 podman[247433]: 2026-01-23 12:07:34.744539142 +0000 UTC m=+0.073023523 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
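[annotation] Taken together, the healthcheck events map out this node's telemetry endpoints: node_exporter on 9100, podman_exporter on 9882, openstack_network_exporter on 9105, kepler on 8888 (ports from the config_data above). A minimal scrape sketch; the first three serve TLS per their mounted web/tls configs, so verification is disabled purely for illustration, and kepler is assumed plain HTTP:

    import requests
    import urllib3

    urllib3.disable_warnings()  # self-signed certs; illustration only

    for url in (
        "https://localhost:9100/metrics",   # node_exporter
        "https://localhost:9882/metrics",   # podman_exporter
        "https://localhost:9105/metrics",   # openstack_network_exporter
        "http://localhost:8888/metrics",    # kepler (assumed plain HTTP)
    ):
        r = requests.get(url, verify=False, timeout=5)
        print(url, r.status_code, len(r.text.splitlines()), "lines")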
Jan 23 12:07:37 compute-0 ovn_controller[97581]: 2026-01-23T12:07:37Z|00064|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 23 12:07:37 compute-0 nova_compute[185173]: 2026-01-23 12:07:37.885 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:39 compute-0 nova_compute[185173]: 2026-01-23 12:07:39.108 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:39 compute-0 sshd-session[247456]: Invalid user sol from 45.148.10.240 port 38432
Jan 23 12:07:39 compute-0 sshd-session[247456]: Connection closed by invalid user sol 45.148.10.240 port 38432 [preauth]
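[annotation] Amid the telemetry noise, this sshd pair records an Internet brute-force probe rejected before authentication. A minimal sketch for pulling such attempts out of a journal dump like this one:

    import re

    pattern = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port (\d+)")
    line = ("Jan 23 12:07:39 compute-0 sshd-session[247456]: "
            "Invalid user sol from 45.148.10.240 port 38432")
    m = pattern.search(line)
    if m:
        user, ip, port = m.groups()
        print(f"rejected preauth login: user={user} ip={ip} port={port}")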
Jan 23 12:07:42 compute-0 podman[247458]: 2026-01-23 12:07:42.757981571 +0000 UTC m=+0.087210742 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.6)
Jan 23 12:07:42 compute-0 nova_compute[185173]: 2026-01-23 12:07:42.886 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:44 compute-0 nova_compute[185173]: 2026-01-23 12:07:44.112 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:47 compute-0 nova_compute[185173]: 2026-01-23 12:07:47.888 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:48 compute-0 podman[247477]: 2026-01-23 12:07:48.72633688 +0000 UTC m=+0.060355392 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:07:48 compute-0 podman[247478]: 2026-01-23 12:07:48.727240132 +0000 UTC m=+0.058388033 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 23 12:07:48 compute-0 podman[247479]: 2026-01-23 12:07:48.754164663 +0000 UTC m=+0.083206013 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 12:07:49 compute-0 nova_compute[185173]: 2026-01-23 12:07:49.115 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:52 compute-0 nova_compute[185173]: 2026-01-23 12:07:52.891 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:54 compute-0 nova_compute[185173]: 2026-01-23 12:07:54.116 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:55 compute-0 podman[247537]: 2026-01-23 12:07:55.779419915 +0000 UTC m=+0.110823240 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 12:07:57 compute-0 nova_compute[185173]: 2026-01-23 12:07:57.892 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:59 compute-0 nova_compute[185173]: 2026-01-23 12:07:59.119 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:07:59 compute-0 podman[247562]: 2026-01-23 12:07:59.737177262 +0000 UTC m=+0.069279550 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., distribution-scope=public, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:07:59 compute-0 podman[201022]: time="2026-01-23T12:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:07:59 compute-0 podman[247563]: 2026-01-23 12:07:59.742564435 +0000 UTC m=+0.072481510 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 23 12:07:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:07:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 23 12:08:01 compute-0 openstack_network_exporter[204160]: ERROR   12:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:08:01 compute-0 openstack_network_exporter[204160]: ERROR   12:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:08:02 compute-0 nova_compute[185173]: 2026-01-23 12:08:02.894 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:04 compute-0 nova_compute[185173]: 2026-01-23 12:08:04.120 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:05 compute-0 podman[247603]: 2026-01-23 12:08:05.720190192 +0000 UTC m=+0.056696832 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:08:07 compute-0 nova_compute[185173]: 2026-01-23 12:08:07.898 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:08 compute-0 nova_compute[185173]: 2026-01-23 12:08:08.420 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:09 compute-0 nova_compute[185173]: 2026-01-23 12:08:09.123 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:12 compute-0 nova_compute[185173]: 2026-01-23 12:08:12.900 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:13 compute-0 podman[247626]: 2026-01-23 12:08:13.718875768 +0000 UTC m=+0.055180005 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Jan 23 12:08:14 compute-0 nova_compute[185173]: 2026-01-23 12:08:14.125 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:17 compute-0 nova_compute[185173]: 2026-01-23 12:08:17.902 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:19 compute-0 nova_compute[185173]: 2026-01-23 12:08:19.130 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:19 compute-0 podman[247646]: 2026-01-23 12:08:19.727030046 +0000 UTC m=+0.063330226 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:08:19 compute-0 podman[247647]: 2026-01-23 12:08:19.736091528 +0000 UTC m=+0.068142704 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 23 12:08:19 compute-0 podman[247648]: 2026-01-23 12:08:19.749174949 +0000 UTC m=+0.074135861 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 23 12:08:22 compute-0 nova_compute[185173]: 2026-01-23 12:08:22.239 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:22 compute-0 nova_compute[185173]: 2026-01-23 12:08:22.904 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:23 compute-0 nova_compute[185173]: 2026-01-23 12:08:23.397 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:24 compute-0 nova_compute[185173]: 2026-01-23 12:08:24.132 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.324 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.325 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.325 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.326 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.626 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.627 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5377MB free_disk=72.41660690307617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
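[annotation] The WARNING fires because libvirt reports multiple sockets per NUMA node on this guest topology, so socket-level PCI NUMA affinity is disabled; the resource view that follows embeds the host's PCI inventory as a JSON list. A minimal sketch for summarizing that field (two entries copied from the list above; the full list holds 5 Intel 8086 and 6 virtio 1af4 devices, all with numa_node null):

    import json
    from collections import Counter

    pci_devices = json.loads(
        '[{"address": "0000:00:00.0", "vendor_id": "8086"},'
        ' {"address": "0000:00:06.0", "vendor_id": "1af4"}]'
    )
    print(Counter(d["vendor_id"] for d in pci_devices))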
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.628 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.628 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:08:26 compute-0 podman[247703]: 2026-01-23 12:08:26.768493374 +0000 UTC m=+0.093908245 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.849 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.849 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.871 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.936 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.938 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:08:26 compute-0 nova_compute[185173]: 2026-01-23 12:08:26.938 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:08:27 compute-0 nova_compute[185173]: 2026-01-23 12:08:27.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:27 compute-0 nova_compute[185173]: 2026-01-23 12:08:27.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:27 compute-0 nova_compute[185173]: 2026-01-23 12:08:27.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 12:08:27 compute-0 nova_compute[185173]: 2026-01-23 12:08:27.578 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:27 compute-0 nova_compute[185173]: 2026-01-23 12:08:27.906 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:08:29.124 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:08:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:08:29.125 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:08:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:08:29.125 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.136 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:29 compute-0 podman[201022]: time="2026-01-23T12:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:08:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:08:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.882 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.882 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.882 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.978 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.978 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:29 compute-0 nova_compute[185173]: 2026-01-23 12:08:29.979 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:08:30 compute-0 nova_compute[185173]: 2026-01-23 12:08:30.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:30 compute-0 nova_compute[185173]: 2026-01-23 12:08:30.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:30 compute-0 podman[247730]: 2026-01-23 12:08:30.730748041 +0000 UTC m=+0.060910426 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 23 12:08:30 compute-0 podman[247729]: 2026-01-23 12:08:30.762758176 +0000 UTC m=+0.095394462 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, container_name=kepler, distribution-scope=public, version=9.4, release=1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:08:31 compute-0 openstack_network_exporter[204160]: ERROR   12:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:08:31 compute-0 openstack_network_exporter[204160]: ERROR   12:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:08:32 compute-0 nova_compute[185173]: 2026-01-23 12:08:32.909 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:34 compute-0 nova_compute[185173]: 2026-01-23 12:08:34.139 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:34 compute-0 nova_compute[185173]: 2026-01-23 12:08:34.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:36 compute-0 podman[247767]: 2026-01-23 12:08:36.73007715 +0000 UTC m=+0.061895670 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 12:08:37 compute-0 nova_compute[185173]: 2026-01-23 12:08:37.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:37 compute-0 nova_compute[185173]: 2026-01-23 12:08:37.914 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:39 compute-0 nova_compute[185173]: 2026-01-23 12:08:39.141 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:42 compute-0 nova_compute[185173]: 2026-01-23 12:08:42.914 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:44 compute-0 nova_compute[185173]: 2026-01-23 12:08:44.145 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:44 compute-0 podman[247791]: 2026-01-23 12:08:44.25170808 +0000 UTC m=+0.078043976 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 23 12:08:46 compute-0 nova_compute[185173]: 2026-01-23 12:08:46.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:08:46 compute-0 nova_compute[185173]: 2026-01-23 12:08:46.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 12:08:46 compute-0 nova_compute[185173]: 2026-01-23 12:08:46.607 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 12:08:47 compute-0 nova_compute[185173]: 2026-01-23 12:08:47.918 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:49 compute-0 nova_compute[185173]: 2026-01-23 12:08:49.148 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:50 compute-0 podman[247813]: 2026-01-23 12:08:50.742413225 +0000 UTC m=+0.074164010 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:08:50 compute-0 podman[247815]: 2026-01-23 12:08:50.745478451 +0000 UTC m=+0.071523976 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 12:08:50 compute-0 podman[247814]: 2026-01-23 12:08:50.775888817 +0000 UTC m=+0.105863909 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 12:08:52 compute-0 nova_compute[185173]: 2026-01-23 12:08:52.918 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:54 compute-0 nova_compute[185173]: 2026-01-23 12:08:54.150 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:57 compute-0 podman[247874]: 2026-01-23 12:08:57.817387006 +0000 UTC m=+0.142953268 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:08:57 compute-0 nova_compute[185173]: 2026-01-23 12:08:57.920 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:59 compute-0 nova_compute[185173]: 2026-01-23 12:08:59.152 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:08:59 compute-0 podman[201022]: time="2026-01-23T12:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:08:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:08:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Jan 23 12:09:01 compute-0 openstack_network_exporter[204160]: ERROR   12:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:09:01 compute-0 openstack_network_exporter[204160]: ERROR   12:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.457 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.458 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.460 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.461 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.462 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.463 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.463 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.464 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.464 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.465 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.465 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.465 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.466 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.466 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.467 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.467 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.468 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.468 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.468 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.471 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.471 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.472 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.472 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
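[editor's note] The alternating "Executing discovery" / "Skip pollster" pairs above show the agent running the local_instances discovery hook once per pollster and bailing out when it returns nothing, i.e. no VMs are hosted on this compute. A minimal sketch of that control flow, assuming illustrative names (this is not ceilometer's actual API):

# Sketch of the discover-then-skip loop implied by manager.py:294/321 above.
def run_pollster(name, discover):
    resources = discover("local_instances")   # discovery method named in the log
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    return [s for r in resources for s in poll_one(name, r)]

def poll_one(name, resource):
    # placeholder: a real pollster would read stats for this resource, e.g. via libvirt
    yield {"meter": name, "resource": resource}

print(run_pollster("memory.usage", lambda method: []))        # no VMs -> skip, returns []
print(run_pollster("memory.usage", lambda method: ["vm-1"]))  # one hypothetical VM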
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:09:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:09:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
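[editor's note] Bursts like the "Finished processing pollster [...]" run above are easiest to audit programmatically. A small helper that tallies skipped vs. finished cycles per meter from a saved journal dump; the file name compute-0.log is an assumption:

# Count "Skip pollster" vs "Finished processing pollster" occurrences per meter.
import re
from collections import Counter

skip, done = Counter(), Counter()
pat_skip = re.compile(r"Skip pollster (\S+),")
pat_done = re.compile(r"Finished processing pollster \[([^\]]+)\]")

with open("compute-0.log") as fh:
    for line in fh:
        if m := pat_skip.search(line):
            skip[m.group(1)] += 1
        elif m := pat_done.search(line):
            done[m.group(1)] += 1

for meter in sorted(set(skip) | set(done)):
    print(f"{meter}: skipped={skip[meter]} finished={done[meter]}")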
Jan 23 12:09:01 compute-0 podman[247903]: 2026-01-23 12:09:01.735332908 +0000 UTC m=+0.067320632 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 12:09:01 compute-0 podman[247902]: 2026-01-23 12:09:01.760188798 +0000 UTC m=+0.095450793 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-type=git, config_id=kepler, release=1214.1726694543, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64)
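[editor's note] The podman health_status events above come from the scheduled healthchecks declared in config_data (e.g. 'test': '/openstack/healthcheck kepler'). To spot-check one container outside the journal, a sketch reading `podman inspect` JSON; it assumes podman is on PATH, and probes both key spellings since the layout shifted across podman releases:

# Read a container's health state from `podman inspect` output.
import json, subprocess

def health(name: str) -> str:
    out = subprocess.run(["podman", "inspect", name],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0].get("State", {})
    # newer podman mirrors Docker's State.Health; older releases used State.Healthcheck
    h = state.get("Health") or state.get("Healthcheck") or {}
    return h.get("Status", "unknown")

print(health("kepler"))  # expected "healthy", matching the log line above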
Jan 23 12:09:02 compute-0 nova_compute[185173]: 2026-01-23 12:09:02.923 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:04 compute-0 nova_compute[185173]: 2026-01-23 12:09:04.155 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:07 compute-0 podman[247938]: 2026-01-23 12:09:07.761181939 +0000 UTC m=+0.088941494 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:09:07 compute-0 nova_compute[185173]: 2026-01-23 12:09:07.923 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:09 compute-0 nova_compute[185173]: 2026-01-23 12:09:09.158 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:12 compute-0 nova_compute[185173]: 2026-01-23 12:09:12.926 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:14 compute-0 nova_compute[185173]: 2026-01-23 12:09:14.162 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:14 compute-0 podman[247962]: 2026-01-23 12:09:14.75222201 +0000 UTC m=+0.079054850 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 23 12:09:17 compute-0 nova_compute[185173]: 2026-01-23 12:09:17.927 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:19 compute-0 nova_compute[185173]: 2026-01-23 12:09:19.166 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:21 compute-0 podman[247983]: 2026-01-23 12:09:21.728625264 +0000 UTC m=+0.063760586 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:09:21 compute-0 podman[247984]: 2026-01-23 12:09:21.733153835 +0000 UTC m=+0.065019677 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 12:09:21 compute-0 podman[247985]: 2026-01-23 12:09:21.775688148 +0000 UTC m=+0.100592879 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 12:09:22 compute-0 nova_compute[185173]: 2026-01-23 12:09:22.609 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:22 compute-0 nova_compute[185173]: 2026-01-23 12:09:22.929 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:24 compute-0 nova_compute[185173]: 2026-01-23 12:09:24.167 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:26 compute-0 nova_compute[185173]: 2026-01-23 12:09:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.270 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.271 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.271 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.271 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.588 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.589 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5377MB free_disk=72.41660690307617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.589 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.589 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:09:27 compute-0 nova_compute[185173]: 2026-01-23 12:09:27.930 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:28 compute-0 nova_compute[185173]: 2026-01-23 12:09:28.583 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:09:28 compute-0 nova_compute[185173]: 2026-01-23 12:09:28.584 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:09:28 compute-0 nova_compute[185173]: 2026-01-23 12:09:28.718 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:09:28 compute-0 nova_compute[185173]: 2026-01-23 12:09:28.732 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
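[editor's note] The inventory dict logged above is what determines schedulable capacity in placement, computed per resource class as (total - reserved) * allocation_ratio. Worked through with the logged numbers:

# Effective capacity from the inventory reported to placement above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2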
Jan 23 12:09:28 compute-0 nova_compute[185173]: 2026-01-23 12:09:28.733 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:09:28 compute-0 nova_compute[185173]: 2026-01-23 12:09:28.733 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:09:28 compute-0 podman[248040]: 2026-01-23 12:09:28.831424197 +0000 UTC m=+0.163617395 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 12:09:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:09:29.126 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:09:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:09:29.127 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:09:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:09:29.127 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:09:29 compute-0 nova_compute[185173]: 2026-01-23 12:09:29.170 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:29 compute-0 nova_compute[185173]: 2026-01-23 12:09:29.733 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:29 compute-0 podman[201022]: time="2026-01-23T12:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:09:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:09:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
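[editor's note] The two GET lines above are the podman_exporter scraping the libpod REST API over podman's Unix socket (the path comes from CONTAINER_HOST in its config earlier in this log). A sketch issuing a similar query with only the standard library; the minimal HTTPConnection-over-Unix-socket subclass is an assumption, not a podman client:

# Query the libpod containers endpoint over the Unix socket seen in the log.
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
print(len(json.loads(body)), "containers")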
Jan 23 12:09:30 compute-0 nova_compute[185173]: 2026-01-23 12:09:30.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.253 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.254 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.254 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:31 compute-0 nova_compute[185173]: 2026-01-23 12:09:31.255 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:09:31 compute-0 openstack_network_exporter[204160]: ERROR   12:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:09:31 compute-0 openstack_network_exporter[204160]: ERROR   12:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
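[editor's note] The two ERROR lines above are the exporter calling OVS appctl commands that only exist for the userspace (dpif-netdev, i.e. DPDK/PMD) datapath; on a host running the kernel datapath they fail with "please specify an existing datapath" and are recurring noise rather than a fault. A sketch reproducing the same call, assuming ovs-appctl is installed and can reach the local vswitchd:

# Reproduce the failing PMD query; a kernel-datapath host returns the same error.
import subprocess

r = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                   capture_output=True, text=True)
print(r.returncode, (r.stdout or r.stderr).strip())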
Jan 23 12:09:32 compute-0 podman[248064]: 2026-01-23 12:09:32.728376994 +0000 UTC m=+0.063323215 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, release-0.7.12=, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30)
Jan 23 12:09:32 compute-0 podman[248065]: 2026-01-23 12:09:32.747144304 +0000 UTC m=+0.075453142 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 12:09:32 compute-0 nova_compute[185173]: 2026-01-23 12:09:32.932 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:34 compute-0 nova_compute[185173]: 2026-01-23 12:09:34.173 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:36 compute-0 nova_compute[185173]: 2026-01-23 12:09:36.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:09:37 compute-0 nova_compute[185173]: 2026-01-23 12:09:37.933 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:38 compute-0 podman[248100]: 2026-01-23 12:09:38.746182708 +0000 UTC m=+0.076493979 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 12:09:39 compute-0 nova_compute[185173]: 2026-01-23 12:09:39.175 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:42 compute-0 nova_compute[185173]: 2026-01-23 12:09:42.936 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:44 compute-0 nova_compute[185173]: 2026-01-23 12:09:44.177 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:45 compute-0 podman[248123]: 2026-01-23 12:09:45.751910443 +0000 UTC m=+0.084228218 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, io.openshift.expose-services=, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 23 12:09:47 compute-0 nova_compute[185173]: 2026-01-23 12:09:47.939 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:49 compute-0 nova_compute[185173]: 2026-01-23 12:09:49.179 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:52 compute-0 podman[248146]: 2026-01-23 12:09:52.72930459 +0000 UTC m=+0.059539792 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 12:09:52 compute-0 podman[248144]: 2026-01-23 12:09:52.749610338 +0000 UTC m=+0.084693389 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:09:52 compute-0 podman[248145]: 2026-01-23 12:09:52.763058528 +0000 UTC m=+0.094353106 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 23 12:09:52 compute-0 nova_compute[185173]: 2026-01-23 12:09:52.941 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:54 compute-0 nova_compute[185173]: 2026-01-23 12:09:54.181 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:54 compute-0 sshd-session[248203]: Invalid user solana from 45.148.10.240 port 49236
Jan 23 12:09:54 compute-0 sshd-session[248203]: Connection closed by invalid user solana 45.148.10.240 port 49236 [preauth]
Jan 23 12:09:57 compute-0 nova_compute[185173]: 2026-01-23 12:09:57.942 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:59 compute-0 nova_compute[185173]: 2026-01-23 12:09:59.184 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:09:59 compute-0 podman[201022]: time="2026-01-23T12:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:09:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:09:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 23 12:09:59 compute-0 podman[248205]: 2026-01-23 12:09:59.7788466 +0000 UTC m=+0.107424183 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 23 12:10:01 compute-0 openstack_network_exporter[204160]: ERROR   12:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:10:01 compute-0 openstack_network_exporter[204160]: ERROR   12:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:10:02 compute-0 nova_compute[185173]: 2026-01-23 12:10:02.945 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:03 compute-0 podman[248231]: 2026-01-23 12:10:03.743885255 +0000 UTC m=+0.068109228 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi)
Jan 23 12:10:03 compute-0 podman[248230]: 2026-01-23 12:10:03.770865891 +0000 UTC m=+0.099601047 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-type=git, architecture=x86_64, config_id=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Jan 23 12:10:04 compute-0 nova_compute[185173]: 2026-01-23 12:10:04.187 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:07 compute-0 nova_compute[185173]: 2026-01-23 12:10:07.946 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:09 compute-0 nova_compute[185173]: 2026-01-23 12:10:09.189 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:09 compute-0 podman[248269]: 2026-01-23 12:10:09.744454665 +0000 UTC m=+0.071493972 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:10:12 compute-0 nova_compute[185173]: 2026-01-23 12:10:12.951 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:14 compute-0 nova_compute[185173]: 2026-01-23 12:10:14.192 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:16 compute-0 podman[248292]: 2026-01-23 12:10:16.756548983 +0000 UTC m=+0.083499924 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, distribution-scope=public, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:10:17 compute-0 nova_compute[185173]: 2026-01-23 12:10:17.952 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:19 compute-0 nova_compute[185173]: 2026-01-23 12:10:19.193 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:22 compute-0 nova_compute[185173]: 2026-01-23 12:10:22.955 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:23 compute-0 nova_compute[185173]: 2026-01-23 12:10:23.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:23 compute-0 podman[248314]: 2026-01-23 12:10:23.737335503 +0000 UTC m=+0.073374120 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:10:23 compute-0 podman[248316]: 2026-01-23 12:10:23.737307632 +0000 UTC m=+0.066606350 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:10:23 compute-0 podman[248315]: 2026-01-23 12:10:23.745224601 +0000 UTC m=+0.077786240 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 12:10:24 compute-0 nova_compute[185173]: 2026-01-23 12:10:24.196 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:26 compute-0 nova_compute[185173]: 2026-01-23 12:10:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.285 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.285 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.286 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.286 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.586 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.588 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5374MB free_disk=72.41539764404297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.588 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.589 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:10:27 compute-0 nova_compute[185173]: 2026-01-23 12:10:27.958 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:10:29.126 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:10:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:10:29.127 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:10:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:10:29.127 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:10:29 compute-0 nova_compute[185173]: 2026-01-23 12:10:29.199 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:29 compute-0 podman[201022]: time="2026-01-23T12:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:10:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:10:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Jan 23 12:10:30 compute-0 nova_compute[185173]: 2026-01-23 12:10:30.124 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:10:30 compute-0 nova_compute[185173]: 2026-01-23 12:10:30.124 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:10:30 compute-0 nova_compute[185173]: 2026-01-23 12:10:30.330 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:10:30 compute-0 nova_compute[185173]: 2026-01-23 12:10:30.401 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:10:30 compute-0 nova_compute[185173]: 2026-01-23 12:10:30.403 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:10:30 compute-0 nova_compute[185173]: 2026-01-23 12:10:30.403 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:10:30 compute-0 podman[248374]: 2026-01-23 12:10:30.756590839 +0000 UTC m=+0.093908164 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 12:10:31 compute-0 openstack_network_exporter[204160]: ERROR   12:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:10:31 compute-0 openstack_network_exporter[204160]: ERROR   12:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:10:32 compute-0 nova_compute[185173]: 2026-01-23 12:10:32.404 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:32 compute-0 nova_compute[185173]: 2026-01-23 12:10:32.404 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:10:32 compute-0 nova_compute[185173]: 2026-01-23 12:10:32.404 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:10:32 compute-0 nova_compute[185173]: 2026-01-23 12:10:32.960 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:33 compute-0 nova_compute[185173]: 2026-01-23 12:10:33.321 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 12:10:33 compute-0 nova_compute[185173]: 2026-01-23 12:10:33.322 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:33 compute-0 nova_compute[185173]: 2026-01-23 12:10:33.322 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:33 compute-0 nova_compute[185173]: 2026-01-23 12:10:33.322 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:33 compute-0 nova_compute[185173]: 2026-01-23 12:10:33.322 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:33 compute-0 nova_compute[185173]: 2026-01-23 12:10:33.322 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:10:34 compute-0 nova_compute[185173]: 2026-01-23 12:10:34.202 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:34 compute-0 podman[248400]: 2026-01-23 12:10:34.737729457 +0000 UTC m=+0.073788270 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-type=git, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, release-0.7.12=, version=9.4, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Jan 23 12:10:34 compute-0 podman[248401]: 2026-01-23 12:10:34.754872047 +0000 UTC m=+0.088700524 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:10:37 compute-0 nova_compute[185173]: 2026-01-23 12:10:37.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:37 compute-0 nova_compute[185173]: 2026-01-23 12:10:37.963 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:39 compute-0 nova_compute[185173]: 2026-01-23 12:10:39.205 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:40 compute-0 podman[248438]: 2026-01-23 12:10:40.724478541 +0000 UTC m=+0.051612714 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 23 12:10:42 compute-0 nova_compute[185173]: 2026-01-23 12:10:42.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:10:42 compute-0 nova_compute[185173]: 2026-01-23 12:10:42.965 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:44 compute-0 nova_compute[185173]: 2026-01-23 12:10:44.208 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:47 compute-0 podman[248461]: 2026-01-23 12:10:47.757663066 +0000 UTC m=+0.088421627 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, container_name=openstack_network_exporter)
Jan 23 12:10:47 compute-0 nova_compute[185173]: 2026-01-23 12:10:47.967 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:49 compute-0 nova_compute[185173]: 2026-01-23 12:10:49.210 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:52 compute-0 nova_compute[185173]: 2026-01-23 12:10:52.971 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:54 compute-0 nova_compute[185173]: 2026-01-23 12:10:54.213 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:54 compute-0 podman[248485]: 2026-01-23 12:10:54.728425696 +0000 UTC m=+0.053110092 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:10:54 compute-0 podman[248483]: 2026-01-23 12:10:54.755094314 +0000 UTC m=+0.085639667 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:10:54 compute-0 podman[248484]: 2026-01-23 12:10:54.758196642 +0000 UTC m=+0.087071383 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260120, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 12:10:57 compute-0 nova_compute[185173]: 2026-01-23 12:10:57.974 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:59 compute-0 nova_compute[185173]: 2026-01-23 12:10:59.215 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:10:59 compute-0 podman[201022]: time="2026-01-23T12:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:10:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:10:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3905 "" "Go-http-client/1.1"
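These two GET lines are the podman API service (podman[201022]) answering a Go client over the unix socket — most likely the prometheus-podman-exporter above, whose config_data sets CONTAINER_HOST=unix:///run/podman/podman.sock. The same libpod endpoint can be queried directly with the standard library; a sketch assuming access to that socket (typically root):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix-domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Names"], ctr["State"])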
Jan 23 12:11:01 compute-0 openstack_network_exporter[204160]: ERROR   12:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:11:01 compute-0 openstack_network_exporter[204160]: ERROR   12:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
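The two appctl errors are expected on a host running the kernel OVS datapath: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only answer for a userspace (netdev/DPDK) datapath, so ovs-vswitchd replies "please specify an existing datapath" and the exporter logs it every scrape. A sketch of probing for a userspace datapath before asking for PMD stats, assuming ovs-appctl is available on the host:

    import subprocess

    # 'ovs-appctl dpif/show' lists configured datapaths; a userspace one
    # appears as 'netdev@...'. Only then do the dpif-netdev/pmd-* commands
    # have anything to report.
    out = subprocess.run(["ovs-appctl", "dpif/show"],
                         capture_output=True, text=True, check=True).stdout
    if "netdev@" in out:
        print(subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                             capture_output=True, text=True).stdout)
    else:
        print("no userspace datapath; skipping PMD stats")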
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.458 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.458 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
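The two manager lines above record that this source defines more pollsters than the executor has worker threads ([1] here), so the cycle serializes and takes longer than one pollster's runtime. The effect is easy to reproduce with concurrent.futures; a toy illustration with made-up timings:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(1)  # pretend each pollster takes ~1s
        return name

    pollsters = [f"pollster-{i}" for i in range(4)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 thread, 4 pollsters
        list(pool.map(poll, pollsters))
    print(f"cycle took {time.monotonic() - start:.1f}s")  # ~4s, fully serialized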
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.458 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.460 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.461 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.461 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.462 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.463 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.465 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.465 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.465 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.465 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.466 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.466 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.466 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.466 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.468 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.468 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.468 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f28411caae0>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.468 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:11:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:11:01.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
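Taken together, the DEBUG lines from 12:11:01 trace one complete polling cycle: each pollster is registered onto the executor, the local_instances discovery runs once and is memoized in the discovery cache ({'local_instances': []}), and since this node hosts no instances every pollster is skipped and then marked finished. A condensed, hypothetical sketch of that flow (names are illustrative, not ceilometer's actual API):

    # Hypothetical condensation of the cycle logged above; not ceilometer code.
    discovery_cache = {}

    def discover(method="local_instances"):
        # One discovery per cycle, shared by every pollster via the cache.
        if method not in discovery_cache:
            discovery_cache[method] = []  # no VMs on this compute node
        return discovery_cache[method]

    def poll(name, resources):
        raise NotImplementedError  # real pollsters would query libvirt here

    def run_pollster(name):
        resources = discover()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return poll(name, resources)

    for name in ("memory.usage", "power.state", "cpu"):
        run_pollster(name)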
Jan 23 12:11:01 compute-0 podman[248542]: 2026-01-23 12:11:01.757194291 +0000 UTC m=+0.090506978 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 12:11:02 compute-0 nova_compute[185173]: 2026-01-23 12:11:02.976 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:04 compute-0 nova_compute[185173]: 2026-01-23 12:11:04.219 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:05 compute-0 podman[248569]: 2026-01-23 12:11:05.766147476 +0000 UTC m=+0.085032221 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, name=ubi9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 23 12:11:05 compute-0 podman[248570]: 2026-01-23 12:11:05.770050694 +0000 UTC m=+0.082591990 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 12:11:07 compute-0 nova_compute[185173]: 2026-01-23 12:11:07.980 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:09 compute-0 nova_compute[185173]: 2026-01-23 12:11:09.223 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:11 compute-0 podman[248604]: 2026-01-23 12:11:11.766489141 +0000 UTC m=+0.085950784 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:11:12 compute-0 nova_compute[185173]: 2026-01-23 12:11:12.983 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:14 compute-0 nova_compute[185173]: 2026-01-23 12:11:14.226 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:17 compute-0 nova_compute[185173]: 2026-01-23 12:11:17.985 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:18 compute-0 podman[248627]: 2026-01-23 12:11:18.725108368 +0000 UTC m=+0.061306346 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter)
Jan 23 12:11:19 compute-0 nova_compute[185173]: 2026-01-23 12:11:19.228 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:22 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:11:22.565 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:11:22 compute-0 nova_compute[185173]: 2026-01-23 12:11:22.566 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:22 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:11:22.567 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 12:11:22 compute-0 nova_compute[185173]: 2026-01-23 12:11:22.988 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:23 compute-0 nova_compute[185173]: 2026-01-23 12:11:23.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:24 compute-0 nova_compute[185173]: 2026-01-23 12:11:24.229 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:25 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:11:25.569 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:11:25 compute-0 podman[248649]: 2026-01-23 12:11:25.74020228 +0000 UTC m=+0.068290242 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07)
Jan 23 12:11:25 compute-0 podman[248648]: 2026-01-23 12:11:25.740798775 +0000 UTC m=+0.073923203 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:11:25 compute-0 podman[248650]: 2026-01-23 12:11:25.745459652 +0000 UTC m=+0.073474502 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:11:26 compute-0 nova_compute[185173]: 2026-01-23 12:11:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.278 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.278 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.278 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.278 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.595 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.597 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5382MB free_disk=72.41525650024414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.597 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.598 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.693 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.693 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.719 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.735 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.737 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.738 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:11:27 compute-0 nova_compute[185173]: 2026-01-23 12:11:27.990 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:11:29.128 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:11:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:11:29.128 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:11:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:11:29.128 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:11:29 compute-0 nova_compute[185173]: 2026-01-23 12:11:29.232 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:29 compute-0 podman[201022]: time="2026-01-23T12:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:11:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:11:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 23 12:11:30 compute-0 nova_compute[185173]: 2026-01-23 12:11:30.739 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:31 compute-0 nova_compute[185173]: 2026-01-23 12:11:31.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:31 compute-0 openstack_network_exporter[204160]: ERROR   12:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:11:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:11:31 compute-0 openstack_network_exporter[204160]: ERROR   12:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:11:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:11:32 compute-0 nova_compute[185173]: 2026-01-23 12:11:32.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:32 compute-0 podman[248705]: 2026-01-23 12:11:32.761715653 +0000 UTC m=+0.093135585 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 12:11:32 compute-0 nova_compute[185173]: 2026-01-23 12:11:32.992 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:34 compute-0 nova_compute[185173]: 2026-01-23 12:11:34.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:34 compute-0 nova_compute[185173]: 2026-01-23 12:11:34.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:11:34 compute-0 nova_compute[185173]: 2026-01-23 12:11:34.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:11:34 compute-0 nova_compute[185173]: 2026-01-23 12:11:34.236 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:34 compute-0 nova_compute[185173]: 2026-01-23 12:11:34.791 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 12:11:35 compute-0 nova_compute[185173]: 2026-01-23 12:11:35.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:35 compute-0 nova_compute[185173]: 2026-01-23 12:11:35.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:11:36 compute-0 podman[248731]: 2026-01-23 12:11:36.72764256 +0000 UTC m=+0.062546648 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 12:11:36 compute-0 podman[248732]: 2026-01-23 12:11:36.749420106 +0000 UTC m=+0.082486118 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 12:11:37 compute-0 nova_compute[185173]: 2026-01-23 12:11:37.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:11:37 compute-0 nova_compute[185173]: 2026-01-23 12:11:37.994 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:39 compute-0 nova_compute[185173]: 2026-01-23 12:11:39.239 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:42 compute-0 podman[248770]: 2026-01-23 12:11:42.719967073 +0000 UTC m=+0.054210590 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:11:42 compute-0 nova_compute[185173]: 2026-01-23 12:11:42.997 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:44 compute-0 nova_compute[185173]: 2026-01-23 12:11:44.242 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:48 compute-0 nova_compute[185173]: 2026-01-23 12:11:48.000 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:49 compute-0 nova_compute[185173]: 2026-01-23 12:11:49.246 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:49 compute-0 podman[248796]: 2026-01-23 12:11:49.78076426 +0000 UTC m=+0.097730140 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter)
Jan 23 12:11:52 compute-0 ovn_controller[97581]: 2026-01-23T12:11:52Z|00065|memory_trim|INFO|Detected inactivity (last active 30027 ms ago): trimming memory
Jan 23 12:11:53 compute-0 nova_compute[185173]: 2026-01-23 12:11:53.002 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:54 compute-0 nova_compute[185173]: 2026-01-23 12:11:54.250 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:56 compute-0 podman[248818]: 2026-01-23 12:11:56.792455496 +0000 UTC m=+0.102697884 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:11:56 compute-0 podman[248820]: 2026-01-23 12:11:56.80059482 +0000 UTC m=+0.107653728 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:11:56 compute-0 podman[248819]: 2026-01-23 12:11:56.801128044 +0000 UTC m=+0.115089445 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260120, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true)
Jan 23 12:11:58 compute-0 nova_compute[185173]: 2026-01-23 12:11:58.007 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:58 compute-0 sshd-session[248877]: Invalid user user from 172.86.70.171 port 54790
Jan 23 12:11:58 compute-0 sshd-session[248877]: Connection closed by invalid user user 172.86.70.171 port 54790 [preauth]
Jan 23 12:11:59 compute-0 nova_compute[185173]: 2026-01-23 12:11:59.253 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:11:59 compute-0 podman[201022]: time="2026-01-23T12:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:11:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:11:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 23 12:12:01 compute-0 openstack_network_exporter[204160]: ERROR   12:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:12:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:12:01 compute-0 openstack_network_exporter[204160]: ERROR   12:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:12:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:12:03 compute-0 nova_compute[185173]: 2026-01-23 12:12:03.010 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:03 compute-0 podman[248879]: 2026-01-23 12:12:03.870582787 +0000 UTC m=+0.196456073 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:12:04 compute-0 nova_compute[185173]: 2026-01-23 12:12:04.256 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:04 compute-0 nova_compute[185173]: 2026-01-23 12:12:04.798 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:05 compute-0 nova_compute[185173]: 2026-01-23 12:12:05.540 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:06 compute-0 nova_compute[185173]: 2026-01-23 12:12:06.974 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:07 compute-0 podman[248905]: 2026-01-23 12:12:07.767682449 +0000 UTC m=+0.090052887 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 23 12:12:07 compute-0 podman[248904]: 2026-01-23 12:12:07.785161407 +0000 UTC m=+0.103649268 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, vcs-type=git, version=9.4, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, vendor=Red Hat, Inc., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible)
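Each podman health_status event above embeds the container's full config_data as a Python-style dict literal inside the journal line. A hedged sketch of pulling the healthcheck command back out with only the standard library (the `line` value is a trimmed stand-in for one of the entries above; the brace-matching helper is my own, not podman's):

    import ast

    def extract_config_data(line):
        # config_data=... is a Python dict literal embedded in the journal line;
        # scan to its matching closing brace and literal_eval the span.
        start = line.index('config_data=') + len('config_data=')
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError('unbalanced config_data')

    # Stand-in for one health_status entry above, trimmed for brevity.
    line = ("... config_data={'healthcheck': {'mount': "
            "'/var/lib/openstack/healthchecks/ovn_controller', "
            "'test': '/openstack/healthcheck'}, 'net': 'host'}, ...")
    cfg = extract_config_data(line)
    print(cfg['healthcheck']['test'])   # -> /openstack/healthcheck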
Jan 23 12:12:08 compute-0 nova_compute[185173]: 2026-01-23 12:12:08.012 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:09 compute-0 nova_compute[185173]: 2026-01-23 12:12:09.258 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:10 compute-0 sshd-session[248940]: Invalid user solana from 45.148.10.240 port 35108
Jan 23 12:12:10 compute-0 nova_compute[185173]: 2026-01-23 12:12:10.354 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:10 compute-0 sshd-session[248940]: Connection closed by invalid user solana 45.148.10.240 port 35108 [preauth]
Jan 23 12:12:12 compute-0 nova_compute[185173]: 2026-01-23 12:12:12.704 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:13 compute-0 nova_compute[185173]: 2026-01-23 12:12:13.017 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:13 compute-0 podman[248943]: 2026-01-23 12:12:13.726790841 +0000 UTC m=+0.063854792 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:12:14 compute-0 nova_compute[185173]: 2026-01-23 12:12:14.261 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:14 compute-0 nova_compute[185173]: 2026-01-23 12:12:14.490 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:15 compute-0 nova_compute[185173]: 2026-01-23 12:12:15.613 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:18 compute-0 nova_compute[185173]: 2026-01-23 12:12:18.021 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:19 compute-0 nova_compute[185173]: 2026-01-23 12:12:19.264 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:20 compute-0 podman[248968]: 2026-01-23 12:12:20.759989015 +0000 UTC m=+0.077267457 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64)
Jan 23 12:12:23 compute-0 nova_compute[185173]: 2026-01-23 12:12:23.023 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:23 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:23.930 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:12:23 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:23.931 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
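These two agent lines show ovsdbapp's row-event matching: an update to the single SB_Global row (nb_cfg 10 -> 11) matched an event class, and the agent deferred its Chassis_Private bump by 3 seconds (the write itself lands at 12:12:26.933 below). A hedged sketch of that pattern against ovsdbapp's RowEvent base class; the constructor arguments mirror the repr printed above, while the handler body and schedule_sb_cfg_update() helper are illustrative, not neutron's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Matches updates to the single SB_Global row (nb_cfg bumps)."""

        def __init__(self, agent):
            # events=('update',), table='SB_Global', conditions=None --
            # the signature printed in the "Matched UPDATE" line above.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.agent = agent

        def run(self, event, row, old):
            # Illustrative body: defer the Chassis_Private external_ids bump,
            # mirroring "Delaying updating chassis table for 3 seconds".
            # schedule_sb_cfg_update() is a hypothetical agent helper.
            self.agent.schedule_sb_cfg_update(row.nb_cfg, delay=3)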
Jan 23 12:12:23 compute-0 nova_compute[185173]: 2026-01-23 12:12:23.934 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:24 compute-0 nova_compute[185173]: 2026-01-23 12:12:24.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:24 compute-0 nova_compute[185173]: 2026-01-23 12:12:24.266 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:24 compute-0 nova_compute[185173]: 2026-01-23 12:12:24.885 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.442 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.442 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.483 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.592 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.593 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.604 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.605 185177 INFO nova.compute.claims [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Claim successful on node compute-0.ctlplane.example.com
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.747 185177 DEBUG nova.compute.provider_tree [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.763 185177 DEBUG nova.scheduler.client.report [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
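The inventory dict above is enough to derive what placement will actually hand out: per resource class, schedulable capacity is (total - reserved) * allocation_ratio. A quick worked check of the numbers in this log:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU: 32.0 vcpus, MEMORY_MB: 7167.0 MB, DISK_GB: ~70.2 GB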
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.787 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.787 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.849 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.849 185177 DEBUG nova.network.neutron [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.873 185177 INFO nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 12:12:26 compute-0 nova_compute[185173]: 2026-01-23 12:12:26.890 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 12:12:26 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:26.933 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.001 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.002 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.003 185177 INFO nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Creating image(s)
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.003 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.004 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.005 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.005 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:27 compute-0 nova_compute[185173]: 2026-01-23 12:12:27.006 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
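The Acquiring / acquired / released triples throughout this log all come from oslo.concurrency's lock wrapper, which logs around the critical section at lockutils.py:404/409/423. A minimal sketch of the same pattern, with an illustrative lock name and function rather than nova's real call sites:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # Runs only while "compute_resources" is held; the decorator's
        # inner() wrapper emits the Acquiring / acquired / released DEBUG
        # lines (lockutils.py:404/409/423) seen above.
        pass

    instance_claim()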
Jan 23 12:12:27 compute-0 podman[248991]: 2026-01-23 12:12:27.749785853 +0000 UTC m=+0.075683368 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 23 12:12:27 compute-0 podman[248990]: 2026-01-23 12:12:27.750120231 +0000 UTC m=+0.081228486 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 12:12:27 compute-0 podman[248992]: 2026-01-23 12:12:27.800895213 +0000 UTC m=+0.106248543 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.027 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.095 185177 DEBUG nova.policy [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0e1cef9ff584692b12674d39ab8e57c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '219dee4c2af34d05ac6e31aa65c35134', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.826 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.826 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.841 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.901 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.902 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.908 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 12:12:28 compute-0 nova_compute[185173]: 2026-01-23 12:12:28.908 185177 INFO nova.compute.claims [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Claim successful on node compute-0.ctlplane.example.com
Jan 23 12:12:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:29.129 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:29.129 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:29.129 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.265 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.268 185177 DEBUG nova.scheduler.client.report [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Refreshing inventories for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.271 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.299 185177 DEBUG nova.scheduler.client.report [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Updating ProviderTree inventory for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.299 185177 DEBUG nova.compute.provider_tree [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Updating inventory in ProviderTree for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.317 185177 DEBUG nova.scheduler.client.report [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Refreshing aggregate associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.353 185177 DEBUG nova.scheduler.client.report [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Refreshing trait associations for resource provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd, traits: HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.430 185177 DEBUG nova.compute.provider_tree [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.446 185177 DEBUG nova.scheduler.client.report [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.471 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.472 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.475 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.475 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.475 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.528 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.529 185177 DEBUG nova.network.neutron [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.553 185177 INFO nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.573 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.683 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.686 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.687 185177 INFO nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Creating image(s)
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.687 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "/var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.688 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "/var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.688 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "/var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.689 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:29 compute-0 podman[201022]: time="2026-01-23T12:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:12:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 23 12:12:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
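These two GET lines are the libpod REST API being scraped over podman's unix socket (the podman_exporter container below sets CONTAINER_HOST=unix:///run/podman/podman.sock and mounts that socket for exactly this). A stdlib-only sketch of the same query, assuming the default socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix-domain socket, standard library only."""
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])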
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.991 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.992 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5395MB free_disk=72.41516494750977GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.992 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:29 compute-0 nova_compute[185173]: 2026-01-23 12:12:29.993 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.158 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.158 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance c471a51f-aa4e-4533-a6fa-9a4716ed23ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.159 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.159 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.244 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.289 185177 DEBUG nova.policy [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1eab809d0fb54c0aad115c1f8dbb943b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb61879fc5554da59f69b8ca9516ae29', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.320 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.390 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.391 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:30 compute-0 nova_compute[185173]: 2026-01-23 12:12:30.834 185177 DEBUG nova.network.neutron [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Successfully created port: d9faf41e-a824-421e-81f1-bbae06da88f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 12:12:31 compute-0 openstack_network_exporter[204160]: ERROR   12:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:12:31 compute-0 openstack_network_exporter[204160]: ERROR   12:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
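The dpif-netdev/pmd-perf-show and pmd-rxq-show appctl commands only apply to the userspace (netdev) datapath; on a host whose OVS bridges run on the kernel datapath there is no such datapath, so the call fails with "please specify an existing datapath", which is what the exporter hits here. A hedged sketch of probing for that case before scraping PMD metrics (real ovs-appctl command, illustrative error handling):

    import subprocess

    def pmd_rxq_show():
        # Same appctl command the exporter calls; fails on kernel-datapath hosts.
        proc = subprocess.run(
            ['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
            capture_output=True, text=True)
        if proc.returncode != 0 and 'existing datapath' in proc.stderr:
            return None  # no userspace datapath: skip PMD metrics
        proc.check_returncode()
        return proc.stdout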
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.561 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.642 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.part --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.643 185177 DEBUG nova.virt.images [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] 701e8d50-6f04-4dc4-b857-9ce72ee86552 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.644 185177 DEBUG nova.privsep.utils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.645 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.part /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.859 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.part /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.converted" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.865 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.928 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1.converted --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.930 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.943 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 2.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.945 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.961 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:31 compute-0 nova_compute[185173]: 2026-01-23 12:12:31.978 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.018 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.020 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.021 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.033 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.049 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.050 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.090 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.091 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1,backing_fmt=raw /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.191 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1,backing_fmt=raw /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk 1073741824" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.192 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.193 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.210 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.222 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.275 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.276 185177 DEBUG nova.virt.disk.api [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Checking if we can resize image /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.277 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.291 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.292 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1,backing_fmt=raw /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.329 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1,backing_fmt=raw /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.331 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "79caed8b9a9036a4810e394ed4753d2b091c5fb1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.331 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.346 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.347 185177 DEBUG nova.virt.disk.api [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Cannot resize image /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.348 185177 DEBUG nova.objects.instance [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.387 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/79caed8b9a9036a4810e394ed4753d2b091c5fb1 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.388 185177 DEBUG nova.virt.disk.api [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Checking if we can resize image /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.388 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.403 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.404 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.404 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.420 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.420 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Ensure instance console log exists: /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.421 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.421 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.422 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.471 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.472 185177 DEBUG nova.virt.disk.api [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Cannot resize image /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.472 185177 DEBUG nova.objects.instance [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lazy-loading 'migration_context' on Instance uuid c471a51f-aa4e-4533-a6fa-9a4716ed23ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.517 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.518 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Ensure instance console log exists: /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.519 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.519 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:32 compute-0 nova_compute[185173]: 2026-01-23 12:12:32.520 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:33 compute-0 nova_compute[185173]: 2026-01-23 12:12:33.028 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:34 compute-0 nova_compute[185173]: 2026-01-23 12:12:34.274 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:34 compute-0 podman[249089]: 2026-01-23 12:12:34.834887999 +0000 UTC m=+0.167931239 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 12:12:35 compute-0 nova_compute[185173]: 2026-01-23 12:12:35.115 185177 DEBUG nova.network.neutron [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Successfully created port: 97528895-56d7-4fcd-b4aa-aff6b1af0155 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 12:12:35 compute-0 nova_compute[185173]: 2026-01-23 12:12:35.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:35 compute-0 nova_compute[185173]: 2026-01-23 12:12:35.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:12:35 compute-0 nova_compute[185173]: 2026-01-23 12:12:35.729 185177 DEBUG nova.network.neutron [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Successfully updated port: d9faf41e-a824-421e-81f1-bbae06da88f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.010 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.010 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquired lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.011 185177 DEBUG nova.network.neutron [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.254 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.255 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.255 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.612 185177 DEBUG nova.compute.manager [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Received event network-changed-d9faf41e-a824-421e-81f1-bbae06da88f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.612 185177 DEBUG nova.compute.manager [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Refreshing instance network info cache due to event network-changed-d9faf41e-a824-421e-81f1-bbae06da88f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.613 185177 DEBUG oslo_concurrency.lockutils [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:12:36 compute-0 nova_compute[185173]: 2026-01-23 12:12:36.735 185177 DEBUG nova.network.neutron [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 12:12:37 compute-0 nova_compute[185173]: 2026-01-23 12:12:37.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:37 compute-0 nova_compute[185173]: 2026-01-23 12:12:37.701 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:37 compute-0 nova_compute[185173]: 2026-01-23 12:12:37.933 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:38 compute-0 nova_compute[185173]: 2026-01-23 12:12:38.030 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:38 compute-0 podman[249116]: 2026-01-23 12:12:38.748439473 +0000 UTC m=+0.083779140 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 12:12:38 compute-0 nova_compute[185173]: 2026-01-23 12:12:38.800 185177 DEBUG nova.network.neutron [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Successfully updated port: 97528895-56d7-4fcd-b4aa-aff6b1af0155 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 12:12:38 compute-0 podman[249117]: 2026-01-23 12:12:38.807990995 +0000 UTC m=+0.126746487 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 12:12:38 compute-0 nova_compute[185173]: 2026-01-23 12:12:38.828 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:12:38 compute-0 nova_compute[185173]: 2026-01-23 12:12:38.829 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquired lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:12:38 compute-0 nova_compute[185173]: 2026-01-23 12:12:38.829 185177 DEBUG nova.network.neutron [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.028 185177 DEBUG nova.network.neutron [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updating instance_info_cache with network_info: [{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.056 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Releasing lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.057 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Instance network_info: |[{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.057 185177 DEBUG oslo_concurrency.lockutils [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.057 185177 DEBUG nova.network.neutron [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Refreshing network info cache for port d9faf41e-a824-421e-81f1-bbae06da88f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.061 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Start _get_guest_xml network_info=[{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T12:11:25Z,direct_url=<?>,disk_format='qcow2',id=701e8d50-6f04-4dc4-b857-9ce72ee86552,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T12:11:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': '701e8d50-6f04-4dc4-b857-9ce72ee86552'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.070 185177 WARNING nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.083 185177 DEBUG nova.virt.libvirt.host [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.084 185177 DEBUG nova.virt.libvirt.host [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.089 185177 DEBUG nova.virt.libvirt.host [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.090 185177 DEBUG nova.virt.libvirt.host [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.090 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.091 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T12:11:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e853bd28-b25f-4198-9e4c-86f25bfca225',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T12:11:25Z,direct_url=<?>,disk_format='qcow2',id=701e8d50-6f04-4dc4-b857-9ce72ee86552,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T12:11:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.092 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.092 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.093 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.094 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.094 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.094 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.095 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.095 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.096 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.096 185177 DEBUG nova.virt.hardware [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.102 185177 DEBUG nova.virt.libvirt.vif [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T12:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1715966339',display_name='tempest-AttachInterfacesUnderV243Test-server-1715966339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1715966339',id=6,image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEqmGwMs+GWA2kBdB4XPVlAKSeC3SPMc1Ev+Ck7JIowgPqk0CpuRBThR9XO/GLkQiSvv9436emOVY6urrnKAd2pjFMRHKI8XtWgwsQ+c31zM5Xh1CRL28uvDLVOROdrsVA==',key_name='tempest-keypair-1350791428',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219dee4c2af34d05ac6e31aa65c35134',ramdisk_id='',reservation_id='r-etgv44cg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2029647340',owner_user_name='tempest-AttachInterfacesUnderV243Test-2029647340-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T12:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0e1cef9ff584692b12674d39ab8e57c',uuid=9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.102 185177 DEBUG nova.network.os_vif_util [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Converting VIF {"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.104 185177 DEBUG nova.network.os_vif_util [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:24,bridge_name='br-int',has_traffic_filtering=True,id=d9faf41e-a824-421e-81f1-bbae06da88f5,network=Network(4769a004-5d6e-4d38-99cf-f49693959900),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9faf41e-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.105 185177 DEBUG nova.objects.instance [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.125 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <uuid>9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7</uuid>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <name>instance-00000006</name>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <memory>131072</memory>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <metadata>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1715966339</nova:name>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 12:12:39</nova:creationTime>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:flavor name="m1.nano">
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:memory>128</nova:memory>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:ephemeral>0</nova:ephemeral>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:user uuid="e0e1cef9ff584692b12674d39ab8e57c">tempest-AttachInterfacesUnderV243Test-2029647340-project-member</nova:user>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:project uuid="219dee4c2af34d05ac6e31aa65c35134">tempest-AttachInterfacesUnderV243Test-2029647340</nova:project>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="701e8d50-6f04-4dc4-b857-9ce72ee86552"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <nova:ports>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         <nova:port uuid="d9faf41e-a824-421e-81f1-bbae06da88f5">
Jan 23 12:12:39 compute-0 nova_compute[185173]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:         </nova:port>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       </nova:ports>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </metadata>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <system>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <entry name="serial">9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7</entry>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <entry name="uuid">9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7</entry>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </system>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <os>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </os>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <features>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <apic/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </features>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </clock>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </cpu>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   <devices>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.config"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <interface type="ethernet">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <mac address="fa:16:3e:61:28:24"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <mtu size="1442"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <target dev="tapd9faf41e-a8"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </interface>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/console.log" append="off"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </serial>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <video>
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </video>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </rng>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 12:12:39 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 12:12:39 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 12:12:39 compute-0 nova_compute[185173]:   </devices>
Jan 23 12:12:39 compute-0 nova_compute[185173]: </domain>
Jan 23 12:12:39 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
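
The XML above is what Nova hands to libvirt next. A minimal sketch of that step, assuming a local libvirtd at qemu:///system and the XML saved to domain.xml; Nova actually drives this through its own Guest wrapper, and it starts the domain paused, which matches the "VM Paused" lifecycle event further down:

    import libvirt  # libvirt-python binding

    with open('domain.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        # Start paused; the guest is resumed once VIF plugging and the
        # config drive are finished.
        dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
    finally:
        conn.close()
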
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.126 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Preparing to wait for external event network-vif-plugged-d9faf41e-a824-421e-81f1-bbae06da88f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.127 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.127 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.128 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.128 185177 DEBUG nova.virt.libvirt.vif [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T12:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1715966339',display_name='tempest-AttachInterfacesUnderV243Test-server-1715966339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1715966339',id=6,image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEqmGwMs+GWA2kBdB4XPVlAKSeC3SPMc1Ev+Ck7JIowgPqk0CpuRBThR9XO/GLkQiSvv9436emOVY6urrnKAd2pjFMRHKI8XtWgwsQ+c31zM5Xh1CRL28uvDLVOROdrsVA==',key_name='tempest-keypair-1350791428',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219dee4c2af34d05ac6e31aa65c35134',ramdisk_id='',reservation_id='r-etgv44cg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2029647340',owner_user_name='tempest-AttachInterfacesUnderV243Test-2029647340-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T12:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e0e1cef9ff584692b12674d39ab8e57c',uuid=9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.129 185177 DEBUG nova.network.os_vif_util [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Converting VIF {"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.129 185177 DEBUG nova.network.os_vif_util [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:24,bridge_name='br-int',has_traffic_filtering=True,id=d9faf41e-a824-421e-81f1-bbae06da88f5,network=Network(4769a004-5d6e-4d38-99cf-f49693959900),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9faf41e-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.130 185177 DEBUG os_vif [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:24,bridge_name='br-int',has_traffic_filtering=True,id=d9faf41e-a824-421e-81f1-bbae06da88f5,network=Network(4769a004-5d6e-4d38-99cf-f49693959900),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9faf41e-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.130 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.131 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.131 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.134 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.135 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9faf41e-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.135 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9faf41e-a8, col_values=(('external_ids', {'iface-id': 'd9faf41e-a824-421e-81f1-bbae06da88f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:28:24', 'vm-uuid': '9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.137 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
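
The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp commands committed against the local switch. A rough standalone equivalent, assuming the default OVSDB socket path; os-vif wires its connection up differently, so treat this as a sketch of the commands rather than of os-vif internals:

    import ovs.db.idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed default socket
    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(
        connection.Connection(ovs.db.idl.Idl(OVSDB, helper), timeout=10))

    # The same commands as logged: idempotent bridge and port adds, then the
    # Neutron port identifiers written into the interface's external_ids.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapd9faf41e-a8', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd9faf41e-a8',
            ('external_ids', {
                'iface-id': 'd9faf41e-a824-421e-81f1-bbae06da88f5',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:61:28:24',
                'vm-uuid': '9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7'})))

That external_ids write is what ovn-controller reacts to when it claims the logical port a couple of seconds later in this log.
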
Jan 23 12:12:39 compute-0 NetworkManager[56133]: <info>  [1769170359.1386] manager: (tapd9faf41e-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.140 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.147 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.148 185177 INFO os_vif [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:24,bridge_name='br-int',has_traffic_filtering=True,id=d9faf41e-a824-421e-81f1-bbae06da88f5,network=Network(4769a004-5d6e-4d38-99cf-f49693959900),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9faf41e-a8')
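
The plug that just succeeded is os-vif's public API. A minimal sketch of driving it directly, with every value taken from the VIFOpenVSwitch object logged above (Nova builds these objects through nova_to_osvif_vif rather than by hand):

    import os_vif
    from os_vif import objects

    os_vif.initialize()     # load the in-tree plugins, 'ovs' included
    objects.register_all()  # register the versioned object classes

    vif = objects.vif.VIFOpenVSwitch(
        id='d9faf41e-a824-421e-81f1-bbae06da88f5',
        address='fa:16:3e:61:28:24',
        bridge_name='br-int',
        vif_name='tapd9faf41e-a8',
        port_profile=objects.vif.VIFPortProfileOpenVSwitch(
            interface_id='d9faf41e-a824-421e-81f1-bbae06da88f5'),
        network=objects.network.Network(
            id='4769a004-5d6e-4d38-99cf-f49693959900'))
    instance = objects.instance_info.InstanceInfo(
        uuid='9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7',
        name='instance-00000006')

    os_vif.plug(vif, instance)  # runs OVSDB transactions like those above
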
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.221 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.221 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.221 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] No VIF found with MAC fa:16:3e:61:28:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 12:12:39 compute-0 nova_compute[185173]: 2026-01-23 12:12:39.222 185177 INFO nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Using config drive
Jan 23 12:12:40 compute-0 nova_compute[185173]: 2026-01-23 12:12:40.050 185177 DEBUG nova.network.neutron [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.512 185177 INFO nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Creating config drive at /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.config
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.518 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjb_jc4af execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.663 185177 DEBUG oslo_concurrency.processutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjb_jc4af" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
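
The command string above is oslo's flattened argv; '-publisher' takes the whole "OpenStack Compute 27.5.2-..." string as a single argument even though it prints unquoted. Spelled out explicitly (illustrative only; /tmp/tmpjb_jc4af was Nova's temporary metadata tree and no longer exists after the build):

    import subprocess

    inst = '/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7'
    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', inst + '/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r',
         '-V', 'config-2',       # the volume label cloud-init probes for
         '/tmp/tmpjb_jc4af'],    # staging directory with the metadata tree
        check=True)
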
Jan 23 12:12:41 compute-0 kernel: tapd9faf41e-a8: entered promiscuous mode
Jan 23 12:12:41 compute-0 NetworkManager[56133]: <info>  [1769170361.7590] manager: (tapd9faf41e-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.764 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:41 compute-0 ovn_controller[97581]: 2026-01-23T12:12:41Z|00066|binding|INFO|Claiming lport d9faf41e-a824-421e-81f1-bbae06da88f5 for this chassis.
Jan 23 12:12:41 compute-0 ovn_controller[97581]: 2026-01-23T12:12:41Z|00067|binding|INFO|d9faf41e-a824-421e-81f1-bbae06da88f5: Claiming fa:16:3e:61:28:24 10.100.0.9
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.774 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.780 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.793 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:28:24 10.100.0.9'], port_security=['fa:16:3e:61:28:24 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4769a004-5d6e-4d38-99cf-f49693959900', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219dee4c2af34d05ac6e31aa65c35134', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cea7ef28-0e53-4d1b-9894-bdd04ace9b30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=651e9d7f-9191-4046-89f0-ae232a05078e, chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=d9faf41e-a824-421e-81f1-bbae06da88f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.796 106832 INFO neutron.agent.ovn.metadata.agent [-] Port d9faf41e-a824-421e-81f1-bbae06da88f5 in datapath 4769a004-5d6e-4d38-99cf-f49693959900 bound to our chassis
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.799 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4769a004-5d6e-4d38-99cf-f49693959900
Jan 23 12:12:41 compute-0 systemd-udevd[249175]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 12:12:41 compute-0 systemd-machined[156550]: New machine qemu-6-instance-00000006.
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.817 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[11c9508e-8aa7-4c88-8ae2-eca119ac9ea2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 NetworkManager[56133]: <info>  [1769170361.8191] device (tapd9faf41e-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.819 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4769a004-51 in ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
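
A rough pyroute2 equivalent of the veth provisioning the agent performs here through neutron's privileged ip_lib helpers. Interface and namespace names are taken from the log; the -50 end stays in the root namespace for attachment to br-int, while the -51 end serves metadata inside the namespace:

    from pyroute2 import IPRoute, NetNS, netns

    NS = 'ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900'
    netns.create(NS)

    with IPRoute() as ipr:
        ipr.link('add', ifname='tap4769a004-50', kind='veth',
                 peer='tap4769a004-51')
        idx51 = ipr.link_lookup(ifname='tap4769a004-51')[0]
        ipr.link('set', index=idx51, net_ns_fd=NS)   # move into namespace
        idx50 = ipr.link_lookup(ifname='tap4769a004-50')[0]
        ipr.link('set', index=idx50, state='up')

    with NetNS(NS) as ns:
        idx51 = ns.link_lookup(ifname='tap4769a004-51')[0]
        ns.link('set', index=idx51, state='up')
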
Jan 23 12:12:41 compute-0 NetworkManager[56133]: <info>  [1769170361.8203] device (tapd9faf41e-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.821 238267 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4769a004-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.822 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[87453745-cfde-4c1d-a84c-5b877e811fec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.823 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[2e07e29b-82cd-4e96-9eaa-560abdc09d82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.843 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[fda695ab-d9d1-4adb-baf4-4e4d115ca313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 ovn_controller[97581]: 2026-01-23T12:12:41Z|00068|binding|INFO|Setting lport d9faf41e-a824-421e-81f1-bbae06da88f5 ovn-installed in OVS
Jan 23 12:12:41 compute-0 ovn_controller[97581]: 2026-01-23T12:12:41Z|00069|binding|INFO|Setting lport d9faf41e-a824-421e-81f1-bbae06da88f5 up in Southbound
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.847 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.871 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f2777d-1162-4aae-8d18-a0661d4bc4fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.900 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[0912d6c5-5b64-43d5-a274-0d7879122143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 systemd-udevd[249178]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 12:12:41 compute-0 NetworkManager[56133]: <info>  [1769170361.9082] manager: (tap4769a004-50): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.907 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[e905091c-7b2e-4616-9589-dda7c5d9b900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.937 185177 DEBUG nova.compute.manager [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-changed-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.937 185177 DEBUG nova.compute.manager [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Refreshing instance network info cache due to event network-changed-97528895-56d7-4fcd-b4aa-aff6b1af0155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:12:41 compute-0 nova_compute[185173]: 2026-01-23 12:12:41.937 185177 DEBUG oslo_concurrency.lockutils [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.939 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[823cd46b-55a1-4155-8f72-78aa79a98704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.944 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[f9714965-604c-4052-938a-3fe90d552a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:41 compute-0 NetworkManager[56133]: <info>  [1769170361.9678] device (tap4769a004-50): carrier: link connected
Jan 23 12:12:41 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:41.972 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[63ea8012-820b-4cd1-bcb0-fa89481c4cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.000 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[d96d9853-77a2-4673-8972-ba291de62a11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4769a004-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:67:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528259, 'reachable_time': 24087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249210, 'error': None, 'target': 'ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.025 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[b34d4259-8c3c-4241-9f93-c13ef9910ceb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:67d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528259, 'tstamp': 528259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249211, 'error': None, 'target': 'ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.042 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[0493bc80-4993-409e-aa08-ac268594515f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4769a004-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:67:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528259, 'reachable_time': 24087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249215, 'error': None, 'target': 'ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.075 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[bedb1258-a348-42bb-9fb9-a1b1862ddd1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.141 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769170362.140851, 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.141 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[bf089c29-b58c-470d-a763-9a328a9066f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.142 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] VM Started (Lifecycle Event)
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.143 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4769a004-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.143 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.143 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4769a004-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.145 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:42 compute-0 kernel: tap4769a004-50: entered promiscuous mode
Jan 23 12:12:42 compute-0 NetworkManager[56133]: <info>  [1769170362.1477] manager: (tap4769a004-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.148 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.149 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4769a004-50, col_values=(('external_ids', {'iface-id': '9cbf67d5-0442-4a05-87a4-97f78502296a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.150 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:42 compute-0 ovn_controller[97581]: 2026-01-23T12:12:42Z|00070|binding|INFO|Releasing lport 9cbf67d5-0442-4a05-87a4-97f78502296a from this chassis (sb_readonly=0)
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.152 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.153 106832 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4769a004-5d6e-4d38-99cf-f49693959900.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4769a004-5d6e-4d38-99cf-f49693959900.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.155 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[b786ce8b-706f-4c83-a379-ce7d75167c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.158 106832 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: global
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     log         /dev/log local0 debug
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     log-tag     haproxy-metadata-proxy-4769a004-5d6e-4d38-99cf-f49693959900
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     user        root
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     group       root
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     maxconn     1024
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     pidfile     /var/lib/neutron/external/pids/4769a004-5d6e-4d38-99cf-f49693959900.pid.haproxy
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     daemon
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: defaults
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     log global
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     mode http
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     option httplog
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     option dontlognull
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     option http-server-close
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     option forwardfor
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     retries                 3
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     timeout http-request    30s
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     timeout connect         30s
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     timeout client          32s
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     timeout server          32s
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     timeout http-keep-alive 30s
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: listen listener
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     bind 169.254.169.254:80
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:     http-request add-header X-OVN-Network-ID 4769a004-5d6e-4d38-99cf-f49693959900
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
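
The rendered configuration binds 169.254.169.254:80 inside the namespace and forwards every request to the agent's unix socket, stamping it with the X-OVN-Network-ID header so the agent can tell which network the caller sits on. From inside a guest on this network the usual probe looks like this (sketch; run in the guest, standard library only):

    import urllib.request

    # 169.254.169.254 is the link-local metadata address served by the
    # haproxy instance configured above.
    with urllib.request.urlopen(
            'http://169.254.169.254/openstack/latest/meta_data.json',
            timeout=10) as resp:
        print(resp.read().decode())
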
Jan 23 12:12:42 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:42.159 106832 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900', 'env', 'PROCESS_TAG=haproxy-4769a004-5d6e-4d38-99cf-f49693959900', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4769a004-5d6e-4d38-99cf-f49693959900.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.177 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.182 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.184 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769170362.1409848, 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.185 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] VM Paused (Lifecycle Event)
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.215 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.221 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 12:12:42 compute-0 nova_compute[185173]: 2026-01-23 12:12:42.260 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 12:12:42 compute-0 podman[249251]: 2026-01-23 12:12:42.585491621 +0000 UTC m=+0.078189361 container create a5a27c8082ebb9e34d6b3f83d2411f8ff66d2c8ade38e856a3c36a46005fea69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:12:42 compute-0 podman[249251]: 2026-01-23 12:12:42.536742609 +0000 UTC m=+0.029440379 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 12:12:42 compute-0 systemd[1]: Started libpod-conmon-a5a27c8082ebb9e34d6b3f83d2411f8ff66d2c8ade38e856a3c36a46005fea69.scope.
Jan 23 12:12:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 12:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566c1e117137a54bf716741cf68ffec03a9ab64a9e4f88292c403a495388d8f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 12:12:42 compute-0 podman[249251]: 2026-01-23 12:12:42.688250386 +0000 UTC m=+0.180948126 container init a5a27c8082ebb9e34d6b3f83d2411f8ff66d2c8ade38e856a3c36a46005fea69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:12:42 compute-0 podman[249251]: 2026-01-23 12:12:42.69481546 +0000 UTC m=+0.187513180 container start a5a27c8082ebb9e34d6b3f83d2411f8ff66d2c8ade38e856a3c36a46005fea69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:12:42 compute-0 neutron-haproxy-ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900[249266]: [NOTICE]   (249270) : New worker (249272) forked
Jan 23 12:12:42 compute-0 neutron-haproxy-ovnmeta-4769a004-5d6e-4d38-99cf-f49693959900[249266]: [NOTICE]   (249270) : Loading success.
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.032 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.098 185177 DEBUG nova.compute.manager [req-1ce9437e-4d6f-4d8c-a7be-9904914c1153 req-c9c1df73-5888-492c-a71c-0030696eda57 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Received event network-vif-plugged-d9faf41e-a824-421e-81f1-bbae06da88f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.099 185177 DEBUG oslo_concurrency.lockutils [req-1ce9437e-4d6f-4d8c-a7be-9904914c1153 req-c9c1df73-5888-492c-a71c-0030696eda57 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.099 185177 DEBUG oslo_concurrency.lockutils [req-1ce9437e-4d6f-4d8c-a7be-9904914c1153 req-c9c1df73-5888-492c-a71c-0030696eda57 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.100 185177 DEBUG oslo_concurrency.lockutils [req-1ce9437e-4d6f-4d8c-a7be-9904914c1153 req-c9c1df73-5888-492c-a71c-0030696eda57 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.101 185177 DEBUG nova.compute.manager [req-1ce9437e-4d6f-4d8c-a7be-9904914c1153 req-c9c1df73-5888-492c-a71c-0030696eda57 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Processing event network-vif-plugged-d9faf41e-a824-421e-81f1-bbae06da88f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.102 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.106 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769170363.105896, 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.107 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] VM Resumed (Lifecycle Event)
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.110 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.115 185177 INFO nova.virt.libvirt.driver [-] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Instance spawned successfully.
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.116 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.146 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.155 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.159 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.160 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.160 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.161 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.161 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.162 185177 DEBUG nova.virt.libvirt.driver [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.195 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.247 185177 INFO nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Took 16.25 seconds to spawn the instance on the hypervisor.
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.248 185177 DEBUG nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.429 185177 INFO nova.compute.manager [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Took 16.87 seconds to build instance.
Jan 23 12:12:43 compute-0 nova_compute[185173]: 2026-01-23 12:12:43.580 185177 DEBUG oslo_concurrency.lockutils [None req-59feb4f5-b245-43f1-882d-c02a9a6cedb1 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.138 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.405 185177 DEBUG nova.network.neutron [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Updating instance_info_cache with network_info: [{"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.501 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Releasing lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.502 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Instance network_info: |[{"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.504 185177 DEBUG oslo_concurrency.lockutils [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.505 185177 DEBUG nova.network.neutron [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Refreshing network info cache for port 97528895-56d7-4fcd-b4aa-aff6b1af0155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.513 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Start _get_guest_xml network_info=[{"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T12:11:25Z,direct_url=<?>,disk_format='qcow2',id=701e8d50-6f04-4dc4-b857-9ce72ee86552,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T12:11:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': '701e8d50-6f04-4dc4-b857-9ce72ee86552'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.526 185177 WARNING nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.535 185177 DEBUG nova.virt.libvirt.host [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.537 185177 DEBUG nova.virt.libvirt.host [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.545 185177 DEBUG nova.virt.libvirt.host [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.546 185177 DEBUG nova.virt.libvirt.host [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.547 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.548 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T12:11:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e853bd28-b25f-4198-9e4c-86f25bfca225',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T12:11:25Z,direct_url=<?>,disk_format='qcow2',id=701e8d50-6f04-4dc4-b857-9ce72ee86552,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='bd16a0de2f5e4a8480a855ef0e1a3f14',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T12:11:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.549 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.550 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.551 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.551 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.552 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.553 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.553 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.554 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.555 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.556 185177 DEBUG nova.virt.hardware [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.563 185177 DEBUG nova.virt.libvirt.vif [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T12:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1686603901',display_name='tempest-ServersTestJSON-server-1686603901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1686603901',id=7,image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARV7Qsnn5noWYkAfH1/R0v6YRz6i0V3zKYue4g9EFh+/mSMhNE90PZs0Gd5IWFMJ45aIBp7G+ZcSxXnIQIxk+0JErYjG6yNUMZw+LgAqxXqzrzGG+Zhyo3jWgYZuKHw2A==',key_name='tempest-keypair-765321846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb61879fc5554da59f69b8ca9516ae29',ramdisk_id='',reservation_id='r-vgvzwdb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1812239947',owner_user_name='tempest-ServersTestJSON-1812239947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T12:12:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1eab809d0fb54c0aad115c1f8dbb943b',uuid=c471a51f-aa4e-4533-a6fa-9a4716ed23ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.564 185177 DEBUG nova.network.os_vif_util [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Converting VIF {"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.566 185177 DEBUG nova.network.os_vif_util [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.568 185177 DEBUG nova.objects.instance [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lazy-loading 'pci_devices' on Instance uuid c471a51f-aa4e-4533-a6fa-9a4716ed23ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.596 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] End _get_guest_xml xml=<domain type="kvm">
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <uuid>c471a51f-aa4e-4533-a6fa-9a4716ed23ec</uuid>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <name>instance-00000007</name>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <memory>131072</memory>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <vcpu>1</vcpu>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <metadata>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:name>tempest-ServersTestJSON-server-1686603901</nova:name>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:creationTime>2026-01-23 12:12:44</nova:creationTime>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:flavor name="m1.nano">
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:memory>128</nova:memory>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:disk>1</nova:disk>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:swap>0</nova:swap>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:ephemeral>0</nova:ephemeral>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:vcpus>1</nova:vcpus>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       </nova:flavor>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:owner>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:user uuid="1eab809d0fb54c0aad115c1f8dbb943b">tempest-ServersTestJSON-1812239947-project-member</nova:user>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:project uuid="cb61879fc5554da59f69b8ca9516ae29">tempest-ServersTestJSON-1812239947</nova:project>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       </nova:owner>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:root type="image" uuid="701e8d50-6f04-4dc4-b857-9ce72ee86552"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <nova:ports>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         <nova:port uuid="97528895-56d7-4fcd-b4aa-aff6b1af0155">
Jan 23 12:12:44 compute-0 nova_compute[185173]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:         </nova:port>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       </nova:ports>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </nova:instance>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </metadata>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <sysinfo type="smbios">
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <system>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <entry name="manufacturer">RDO</entry>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <entry name="product">OpenStack Compute</entry>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <entry name="serial">c471a51f-aa4e-4533-a6fa-9a4716ed23ec</entry>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <entry name="uuid">c471a51f-aa4e-4533-a6fa-9a4716ed23ec</entry>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <entry name="family">Virtual Machine</entry>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </system>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </sysinfo>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <os>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <boot dev="hd"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <smbios mode="sysinfo"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </os>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <features>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <acpi/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <apic/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <vmcoreinfo/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </features>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <clock offset="utc">
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <timer name="hpet" present="no"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </clock>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <cpu mode="host-model" match="exact">
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </cpu>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   <devices>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <disk type="file" device="disk">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <target dev="vda" bus="virtio"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <disk type="file" device="cdrom">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <driver name="qemu" type="raw" cache="none"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <source file="/var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.config"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <target dev="sda" bus="sata"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </disk>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <interface type="ethernet">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <mac address="fa:16:3e:7b:31:0f"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <mtu size="1442"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <target dev="tap97528895-56"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </interface>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <serial type="pty">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <log file="/var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/console.log" append="off"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </serial>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <video>
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <model type="virtio"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </video>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <input type="tablet" bus="usb"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <rng model="virtio">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <backend model="random">/dev/urandom</backend>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </rng>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <controller type="usb" index="0"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     <memballoon model="virtio">
Jan 23 12:12:44 compute-0 nova_compute[185173]:       <stats period="10"/>
Jan 23 12:12:44 compute-0 nova_compute[185173]:     </memballoon>
Jan 23 12:12:44 compute-0 nova_compute[185173]:   </devices>
Jan 23 12:12:44 compute-0 nova_compute[185173]: </domain>
Jan 23 12:12:44 compute-0 nova_compute[185173]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.598 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Preparing to wait for external event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.598 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.599 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.599 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.600 185177 DEBUG nova.virt.libvirt.vif [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T12:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1686603901',display_name='tempest-ServersTestJSON-server-1686603901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1686603901',id=7,image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARV7Qsnn5noWYkAfH1/R0v6YRz6i0V3zKYue4g9EFh+/mSMhNE90PZs0Gd5IWFMJ45aIBp7G+ZcSxXnIQIxk+0JErYjG6yNUMZw+LgAqxXqzrzGG+Zhyo3jWgYZuKHw2A==',key_name='tempest-keypair-765321846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb61879fc5554da59f69b8ca9516ae29',ramdisk_id='',reservation_id='r-vgvzwdb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1812239947',owner_user_name='tempest-ServersTestJSON-1812239947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T12:12:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1eab809d0fb54c0aad115c1f8dbb943b',uuid=c471a51f-aa4e-4533-a6fa-9a4716ed23ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.601 185177 DEBUG nova.network.os_vif_util [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Converting VIF {"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.601 185177 DEBUG nova.network.os_vif_util [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.602 185177 DEBUG os_vif [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.603 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.603 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.604 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.607 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.607 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97528895-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.608 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97528895-56, col_values=(('external_ids', {'iface-id': '97528895-56d7-4fcd-b4aa-aff6b1af0155', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:31:0f', 'vm-uuid': 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:44 compute-0 NetworkManager[56133]: <info>  [1769170364.6113] manager: (tap97528895-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.612 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.620 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.622 185177 INFO os_vif [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56')
Jan 23 12:12:44 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 12:12:44 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.700 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.701 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.701 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] No VIF found with MAC fa:16:3e:7b:31:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 12:12:44 compute-0 nova_compute[185173]: 2026-01-23 12:12:44.702 185177 INFO nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Using config drive
Jan 23 12:12:44 compute-0 podman[249283]: 2026-01-23 12:12:44.742856039 +0000 UTC m=+0.084699263 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.330 185177 DEBUG nova.network.neutron [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updated VIF entry in instance network info cache for port d9faf41e-a824-421e-81f1-bbae06da88f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.331 185177 DEBUG nova.network.neutron [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updating instance_info_cache with network_info: [{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.357 185177 DEBUG oslo_concurrency.lockutils [req-95c56e42-91bf-430e-9343-59be0c72caed req-7c10c2c0-7e86-415d-9830-e368fa030d67 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.736 185177 DEBUG nova.compute.manager [req-ee7481d5-f7eb-4072-a676-158b80b91c57 req-e1ff300b-1262-460f-b06f-e4d127ad90ef e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Received event network-vif-plugged-d9faf41e-a824-421e-81f1-bbae06da88f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.737 185177 DEBUG oslo_concurrency.lockutils [req-ee7481d5-f7eb-4072-a676-158b80b91c57 req-e1ff300b-1262-460f-b06f-e4d127ad90ef e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.737 185177 DEBUG oslo_concurrency.lockutils [req-ee7481d5-f7eb-4072-a676-158b80b91c57 req-e1ff300b-1262-460f-b06f-e4d127ad90ef e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.738 185177 DEBUG oslo_concurrency.lockutils [req-ee7481d5-f7eb-4072-a676-158b80b91c57 req-e1ff300b-1262-460f-b06f-e4d127ad90ef e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.738 185177 DEBUG nova.compute.manager [req-ee7481d5-f7eb-4072-a676-158b80b91c57 req-e1ff300b-1262-460f-b06f-e4d127ad90ef e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] No waiting events found dispatching network-vif-plugged-d9faf41e-a824-421e-81f1-bbae06da88f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:12:45 compute-0 nova_compute[185173]: 2026-01-23 12:12:45.738 185177 WARNING nova.compute.manager [req-ee7481d5-f7eb-4072-a676-158b80b91c57 req-e1ff300b-1262-460f-b06f-e4d127ad90ef e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Received unexpected event network-vif-plugged-d9faf41e-a824-421e-81f1-bbae06da88f5 for instance with vm_state active and task_state None.
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.162 185177 INFO nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Creating config drive at /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.config
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.175 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppmbtqgcu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.305 185177 DEBUG oslo_concurrency.processutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppmbtqgcu" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:12:46 compute-0 kernel: tap97528895-56: entered promiscuous mode
Jan 23 12:12:46 compute-0 ovn_controller[97581]: 2026-01-23T12:12:46Z|00071|binding|INFO|Claiming lport 97528895-56d7-4fcd-b4aa-aff6b1af0155 for this chassis.
Jan 23 12:12:46 compute-0 ovn_controller[97581]: 2026-01-23T12:12:46Z|00072|binding|INFO|97528895-56d7-4fcd-b4aa-aff6b1af0155: Claiming fa:16:3e:7b:31:0f 10.100.0.9
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.360 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.362 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 NetworkManager[56133]: <info>  [1769170366.3697] manager: (tap97528895-56): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.385 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:31:0f 10.100.0.9'], port_security=['fa:16:3e:7b:31:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb61879fc5554da59f69b8ca9516ae29', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20c4407d-d9a6-4ecc-97ee-228b43dcca88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49648ccd-57b0-4876-96c0-cf7bc10ed8b3, chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=97528895-56d7-4fcd-b4aa-aff6b1af0155) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.386 106832 INFO neutron.agent.ovn.metadata.agent [-] Port 97528895-56d7-4fcd-b4aa-aff6b1af0155 in datapath 6aefc98d-c645-43fb-8c17-03d341d4ab6a bound to our chassis
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.388 106832 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6aefc98d-c645-43fb-8c17-03d341d4ab6a
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.402 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[df5a3f12-1d41-458b-8af7-db3a17f7adf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.403 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6aefc98d-c1 in ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.405 238267 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6aefc98d-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.405 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[326b1e42-ec0c-4551-bc4e-c841af69cd66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.406 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[09487783-17cb-4ecc-9aef-835969f85939]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.419 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.420 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5ee1e5-494f-4832-b1c9-c59ac8a73066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_controller[97581]: 2026-01-23T12:12:46Z|00073|binding|INFO|Setting lport 97528895-56d7-4fcd-b4aa-aff6b1af0155 ovn-installed in OVS
Jan 23 12:12:46 compute-0 ovn_controller[97581]: 2026-01-23T12:12:46Z|00074|binding|INFO|Setting lport 97528895-56d7-4fcd-b4aa-aff6b1af0155 up in Southbound
Jan 23 12:12:46 compute-0 systemd-machined[156550]: New machine qemu-7-instance-00000007.
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.428 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 23 12:12:46 compute-0 systemd-udevd[249345]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.449 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[3d377b7e-5a99-473f-8df6-044966ab72f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 NetworkManager[56133]: <info>  [1769170366.4657] device (tap97528895-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 12:12:46 compute-0 NetworkManager[56133]: <info>  [1769170366.4703] device (tap97528895-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.483 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5cb620-a3e7-4cb5-98c6-2a6f2242f1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.490 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[471f84eb-76db-45e2-a98a-78c8329c97ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 NetworkManager[56133]: <info>  [1769170366.4945] manager: (tap6aefc98d-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.523 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[8d695a6d-3daf-41e9-9b73-661092259c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.528 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[4a15f650-6731-4d82-a09e-7fb60137b0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 NetworkManager[56133]: <info>  [1769170366.5588] device (tap6aefc98d-c0): carrier: link connected
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.564 238300 DEBUG oslo.privsep.daemon [-] privsep: reply[b4df9a3b-d0ac-452b-817c-3a1014c7d1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.588 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5f973f-dd14-4854-b8bf-fc0ffd6717de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6aefc98d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:2a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528718, 'reachable_time': 30151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249375, 'error': None, 'target': 'ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.609 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[c547c04e-d14c-4cc1-a536-7244b0bb1bfb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:2a78'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528718, 'tstamp': 528718}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249376, 'error': None, 'target': 'ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.633 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[50c056a5-9999-46a9-b0a7-baaed2f46f26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6aefc98d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:2a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528718, 'reachable_time': 30151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249377, 'error': None, 'target': 'ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.665 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb0be14-8092-4184-ad03-51553a957ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.763 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[5491da83-10a0-407e-87a2-0ee3ba75fb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.770 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6aefc98d-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.771 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.772 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6aefc98d-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.773 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 kernel: tap6aefc98d-c0: entered promiscuous mode
Jan 23 12:12:46 compute-0 NetworkManager[56133]: <info>  [1769170366.7760] manager: (tap6aefc98d-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.776 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6aefc98d-c0, col_values=(('external_ids', {'iface-id': '3ff9aeaa-ee49-4944-b05b-84916e5e2516'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.778 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.781 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.782 106832 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6aefc98d-c645-43fb-8c17-03d341d4ab6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6aefc98d-c645-43fb-8c17-03d341d4ab6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.783 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[c1475f0e-da75-49ae-9dea-25f0a0d5b983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.784 106832 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: global
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     log         /dev/log local0 debug
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     log-tag     haproxy-metadata-proxy-6aefc98d-c645-43fb-8c17-03d341d4ab6a
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     user        root
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     group       root
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     maxconn     1024
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     pidfile     /var/lib/neutron/external/pids/6aefc98d-c645-43fb-8c17-03d341d4ab6a.pid.haproxy
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     daemon
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: defaults
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     log global
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     mode http
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     option httplog
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     option dontlognull
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     option http-server-close
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     option forwardfor
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     retries                 3
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     timeout http-request    30s
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     timeout connect         30s
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     timeout client          32s
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     timeout server          32s
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     timeout http-keep-alive 30s
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: listen listener
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     bind 169.254.169.254:80
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:     http-request add-header X-OVN-Network-ID 6aefc98d-c645-43fb-8c17-03d341d4ab6a
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 12:12:46 compute-0 ovn_controller[97581]: 2026-01-23T12:12:46Z|00075|binding|INFO|Releasing lport 3ff9aeaa-ee49-4944-b05b-84916e5e2516 from this chassis (sb_readonly=0)
Jan 23 12:12:46 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:12:46.785 106832 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'env', 'PROCESS_TAG=haproxy-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6aefc98d-c645-43fb-8c17-03d341d4ab6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 12:12:46 compute-0 nova_compute[185173]: 2026-01-23 12:12:46.822 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.016 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769170367.0160859, c471a51f-aa4e-4533-a6fa-9a4716ed23ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.017 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] VM Started (Lifecycle Event)
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.038 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.044 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769170367.0162191, c471a51f-aa4e-4533-a6fa-9a4716ed23ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.044 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] VM Paused (Lifecycle Event)
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.085 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.090 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.111 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 12:12:47 compute-0 podman[249414]: 2026-01-23 12:12:47.186938672 +0000 UTC m=+0.051301096 container create e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 12:12:47 compute-0 systemd[1]: Started libpod-conmon-e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa.scope.
Jan 23 12:12:47 compute-0 nova_compute[185173]: 2026-01-23 12:12:47.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:12:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 12:12:47 compute-0 podman[249414]: 2026-01-23 12:12:47.160155381 +0000 UTC m=+0.024517835 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 12:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a05be9ae018304db06664e53d91e3ed7c2139b7b633c0ad3584203789adf30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 12:12:47 compute-0 podman[249414]: 2026-01-23 12:12:47.283979854 +0000 UTC m=+0.148342368 container init e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 12:12:47 compute-0 podman[249414]: 2026-01-23 12:12:47.290869696 +0000 UTC m=+0.155232160 container start e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 12:12:47 compute-0 neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a[249429]: [NOTICE]   (249433) : New worker (249435) forked
Jan 23 12:12:47 compute-0 neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a[249429]: [NOTICE]   (249433) : Loading success.
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.034 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.827 185177 DEBUG nova.network.neutron [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Updated VIF entry in instance network info cache for port 97528895-56d7-4fcd-b4aa-aff6b1af0155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.829 185177 DEBUG nova.network.neutron [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Updating instance_info_cache with network_info: [{"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.961 185177 DEBUG oslo_concurrency.lockutils [req-aa30962c-f9c3-416e-9757-073670c30a64 req-5bf8b210-37c4-45fb-a00b-ba1f73bd98c4 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.974 185177 DEBUG nova.compute.manager [req-1269fc76-0080-4943-b5de-d0131d1f333a req-df884da6-65ec-431f-95fa-02d4af60f04f e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.975 185177 DEBUG oslo_concurrency.lockutils [req-1269fc76-0080-4943-b5de-d0131d1f333a req-df884da6-65ec-431f-95fa-02d4af60f04f e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.976 185177 DEBUG oslo_concurrency.lockutils [req-1269fc76-0080-4943-b5de-d0131d1f333a req-df884da6-65ec-431f-95fa-02d4af60f04f e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.977 185177 DEBUG oslo_concurrency.lockutils [req-1269fc76-0080-4943-b5de-d0131d1f333a req-df884da6-65ec-431f-95fa-02d4af60f04f e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.978 185177 DEBUG nova.compute.manager [req-1269fc76-0080-4943-b5de-d0131d1f333a req-df884da6-65ec-431f-95fa-02d4af60f04f e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Processing event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.980 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.986 185177 DEBUG nova.virt.driver [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] Emitting event <LifecycleEvent: 1769170368.985715, c471a51f-aa4e-4533-a6fa-9a4716ed23ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.987 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] VM Resumed (Lifecycle Event)
Jan 23 12:12:48 compute-0 nova_compute[185173]: 2026-01-23 12:12:48.991 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.000 185177 INFO nova.virt.libvirt.driver [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Instance spawned successfully.
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.001 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.018 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.035 185177 DEBUG nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.041 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.042 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.044 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.045 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.047 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.048 185177 DEBUG nova.virt.libvirt.driver [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.063 185177 INFO nova.compute.manager [None req-161a4fc3-746d-453a-ae54-40285a29550b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.160 185177 INFO nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Took 19.48 seconds to spawn the instance on the hypervisor.
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.161 185177 DEBUG nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.272 185177 INFO nova.compute.manager [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Took 20.39 seconds to build instance.
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.302 185177 DEBUG oslo_concurrency.lockutils [None req-fa14b7a0-c326-43e5-a6cb-d5f064ffa8f3 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:49 compute-0 nova_compute[185173]: 2026-01-23 12:12:49.612 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:50 compute-0 NetworkManager[56133]: <info>  [1769170370.5701] manager: (patch-br-int-to-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 23 12:12:50 compute-0 NetworkManager[56133]: <info>  [1769170370.5733] manager: (patch-provnet-1ca53fac-c1a6-45fe-a18c-749fb9a851a1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 23 12:12:50 compute-0 nova_compute[185173]: 2026-01-23 12:12:50.573 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:50 compute-0 nova_compute[185173]: 2026-01-23 12:12:50.669 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:50 compute-0 ovn_controller[97581]: 2026-01-23T12:12:50Z|00076|binding|INFO|Releasing lport 3ff9aeaa-ee49-4944-b05b-84916e5e2516 from this chassis (sb_readonly=0)
Jan 23 12:12:50 compute-0 ovn_controller[97581]: 2026-01-23T12:12:50Z|00077|binding|INFO|Releasing lport 9cbf67d5-0442-4a05-87a4-97f78502296a from this chassis (sb_readonly=0)
Jan 23 12:12:50 compute-0 nova_compute[185173]: 2026-01-23 12:12:50.696 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:51 compute-0 podman[249446]: 2026-01-23 12:12:51.793647296 +0000 UTC m=+0.114534511 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Jan 23 12:12:52 compute-0 nova_compute[185173]: 2026-01-23 12:12:52.590 185177 DEBUG nova.compute.manager [req-2729d492-fbfe-489b-9477-ac794ef135f6 req-5d35eaa1-db74-40e3-b306-b7592e2fa417 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:52 compute-0 nova_compute[185173]: 2026-01-23 12:12:52.591 185177 DEBUG oslo_concurrency.lockutils [req-2729d492-fbfe-489b-9477-ac794ef135f6 req-5d35eaa1-db74-40e3-b306-b7592e2fa417 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:12:52 compute-0 nova_compute[185173]: 2026-01-23 12:12:52.592 185177 DEBUG oslo_concurrency.lockutils [req-2729d492-fbfe-489b-9477-ac794ef135f6 req-5d35eaa1-db74-40e3-b306-b7592e2fa417 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:12:52 compute-0 nova_compute[185173]: 2026-01-23 12:12:52.593 185177 DEBUG oslo_concurrency.lockutils [req-2729d492-fbfe-489b-9477-ac794ef135f6 req-5d35eaa1-db74-40e3-b306-b7592e2fa417 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:12:52 compute-0 nova_compute[185173]: 2026-01-23 12:12:52.593 185177 DEBUG nova.compute.manager [req-2729d492-fbfe-489b-9477-ac794ef135f6 req-5d35eaa1-db74-40e3-b306-b7592e2fa417 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] No waiting events found dispatching network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:12:52 compute-0 nova_compute[185173]: 2026-01-23 12:12:52.594 185177 WARNING nova.compute.manager [req-2729d492-fbfe-489b-9477-ac794ef135f6 req-5d35eaa1-db74-40e3-b306-b7592e2fa417 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received unexpected event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 for instance with vm_state active and task_state None.
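
The six lines above are nova's external-event plumbing in miniature: the manager receives network-vif-plugged from neutron, takes the per-instance "-events" lock, tries to pop a waiter registered for that event, finds none, and logs the WARNING because the instance is already active with no task in flight, so nothing was waiting on the plug. A sketch of that dispatch shape; names are illustrative, not nova's actual internals:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}   # (instance_uuid, event_name) -> Event

        def prepare_for_event(self, uuid, name):
            waiter = threading.Event()
            with self._lock:                 # "Acquiring lock ...-events"
                self._waiters[(uuid, name)] = waiter
            return waiter

        def pop_instance_event(self, uuid, name):
            with self._lock:                 # held ~1 ms in the log above
                waiter = self._waiters.pop((uuid, name), None)
            if waiter is None:
                print(f"WARNING: unexpected event {name} for {uuid}")
            return waiter

    events = InstanceEvents()
    events.pop_instance_event("c471a51f", "network-vif-plugged-97528895")
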
Jan 23 12:12:53 compute-0 nova_compute[185173]: 2026-01-23 12:12:53.036 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:54 compute-0 nova_compute[185173]: 2026-01-23 12:12:54.619 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:55 compute-0 nova_compute[185173]: 2026-01-23 12:12:55.113 185177 DEBUG nova.compute.manager [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Received event network-changed-d9faf41e-a824-421e-81f1-bbae06da88f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:55 compute-0 nova_compute[185173]: 2026-01-23 12:12:55.114 185177 DEBUG nova.compute.manager [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Refreshing instance network info cache due to event network-changed-d9faf41e-a824-421e-81f1-bbae06da88f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:12:55 compute-0 nova_compute[185173]: 2026-01-23 12:12:55.114 185177 DEBUG oslo_concurrency.lockutils [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:12:55 compute-0 nova_compute[185173]: 2026-01-23 12:12:55.114 185177 DEBUG oslo_concurrency.lockutils [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:12:55 compute-0 nova_compute[185173]: 2026-01-23 12:12:55.115 185177 DEBUG nova.network.neutron [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Refreshing network info cache for port d9faf41e-a824-421e-81f1-bbae06da88f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:12:57 compute-0 nova_compute[185173]: 2026-01-23 12:12:57.872 185177 DEBUG nova.compute.manager [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-changed-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:12:57 compute-0 nova_compute[185173]: 2026-01-23 12:12:57.874 185177 DEBUG nova.compute.manager [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Refreshing instance network info cache due to event network-changed-97528895-56d7-4fcd-b4aa-aff6b1af0155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 12:12:57 compute-0 nova_compute[185173]: 2026-01-23 12:12:57.875 185177 DEBUG oslo_concurrency.lockutils [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:12:57 compute-0 nova_compute[185173]: 2026-01-23 12:12:57.876 185177 DEBUG oslo_concurrency.lockutils [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquired lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:12:57 compute-0 nova_compute[185173]: 2026-01-23 12:12:57.876 185177 DEBUG nova.network.neutron [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Refreshing network info cache for port 97528895-56d7-4fcd-b4aa-aff6b1af0155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 12:12:58 compute-0 nova_compute[185173]: 2026-01-23 12:12:58.038 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:58 compute-0 nova_compute[185173]: 2026-01-23 12:12:58.235 185177 DEBUG nova.network.neutron [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updated VIF entry in instance network info cache for port d9faf41e-a824-421e-81f1-bbae06da88f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 12:12:58 compute-0 nova_compute[185173]: 2026-01-23 12:12:58.236 185177 DEBUG nova.network.neutron [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updating instance_info_cache with network_info: [{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:12:58 compute-0 nova_compute[185173]: 2026-01-23 12:12:58.303 185177 DEBUG oslo_concurrency.lockutils [req-bd489242-1a3b-4dca-a23a-b02a6bfab790 req-7520c83d-3414-40b5-8c35-7b4199b44235 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
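
The instance_info_cache payload logged above is plain JSON, so the fixed and floating addresses can be read straight out of the subnets -> ips -> floating_ips nesting it shows. A short sketch over a trimmed copy of that structure:

    import json

    vif_json = '''
    [{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5",
      "network": {"subnets": [{"ips": [
          {"address": "10.100.0.9", "type": "fixed",
           "floating_ips": [{"address": "192.168.122.200",
                             "type": "floating"}]}]}]}}]
    '''

    for vif in json.loads(vif_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("  floating:", fip["address"])
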
Jan 23 12:12:58 compute-0 podman[249468]: 2026-01-23 12:12:58.740044196 +0000 UTC m=+0.066865987 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 23 12:12:58 compute-0 podman[249466]: 2026-01-23 12:12:58.769923225 +0000 UTC m=+0.104382807 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:12:58 compute-0 podman[249467]: 2026-01-23 12:12:58.769912744 +0000 UTC m=+0.094372735 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 23 12:12:59 compute-0 nova_compute[185173]: 2026-01-23 12:12:59.623 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:12:59 compute-0 podman[201022]: time="2026-01-23T12:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:12:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 23 12:12:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4853 "" "Go-http-client/1.1"
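
The two GET lines above are clients on the libpod REST API served over the podman socket (the exporter's CONTAINER_HOST is unix:///run/podman/podman.sock per the events earlier). A stdlib-only sketch of the same containers/json call; the socket path and the v4.9.3 prefix are read from this log and will differ per host:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that dials an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    print(len(json.loads(body)), "containers")
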
Jan 23 12:13:01 compute-0 openstack_network_exporter[204160]: ERROR   12:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:13:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:13:01 compute-0 openstack_network_exporter[204160]: ERROR   12:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:13:01 compute-0 openstack_network_exporter[204160]: 
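
The two appctl errors above come from the exporter invoking PMD statistics commands (dpif-netdev/pmd-rxq-show, dpif-netdev/pmd-perf-show) against an OVS running the kernel datapath; those commands apply only to the userspace (netdev/DPDK) datapath, so ovs-vswitchd answers "please specify an existing datapath". A sketch of probing for that and degrading gracefully instead of logging an ERROR on every scrape:

    import subprocess

    def pmd_rxq_show():
        proc = subprocess.run(
            ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
            capture_output=True, text=True)
        if proc.returncode != 0:
            # Expected on kernel-datapath hosts: skip PMD metrics.
            print("no userspace datapath:", proc.stderr.strip())
            return None
        return proc.stdout
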
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.458 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.459 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.459 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.461 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283aa35eb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
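
The registration burst above funnels a couple dozen pollsters into an executor that, per the earlier DEBUG lines, has only one worker thread, which is exactly why the manager warns the polling cycle may run longer than expected. The same shape in miniature with concurrent.futures:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # stand-in for one pollster's sample-collection pass
        return f"{name}: polled"

    pollsters = [f"pollster-{i}" for i in range(24)]        # more tasks...
    with ThreadPoolExecutor(max_workers=1) as executor:     # ...than threads
        for result in executor.map(poll, pollsters):
            print(result)
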
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.465 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 23 12:13:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:01.466 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad70b57d9194f6532b182b578b16289681d355eb6a1afd27a70859dd1387cbc9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.040 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.246 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.247 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.247 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.248 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.248 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.250 185177 INFO nova.compute.manager [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Terminating instance
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.252 185177 DEBUG nova.compute.manager [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 12:13:03 compute-0 kernel: tap97528895-56 (unregistering): left promiscuous mode
Jan 23 12:13:03 compute-0 NetworkManager[56133]: <info>  [1769170383.2829] device (tap97528895-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.324 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 ovn_controller[97581]: 2026-01-23T12:13:03Z|00078|binding|INFO|Releasing lport 97528895-56d7-4fcd-b4aa-aff6b1af0155 from this chassis (sb_readonly=0)
Jan 23 12:13:03 compute-0 ovn_controller[97581]: 2026-01-23T12:13:03Z|00079|binding|INFO|Setting lport 97528895-56d7-4fcd-b4aa-aff6b1af0155 down in Southbound
Jan 23 12:13:03 compute-0 ovn_controller[97581]: 2026-01-23T12:13:03Z|00080|binding|INFO|Removing iface tap97528895-56 ovn-installed in OVS
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.339 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.363 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:31:0f 10.100.0.9'], port_security=['fa:16:3e:7b:31:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb61879fc5554da59f69b8ca9516ae29', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20c4407d-d9a6-4ecc-97ee-228b43dcca88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49648ccd-57b0-4876-96c0-cf7bc10ed8b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>], logical_port=97528895-56d7-4fcd-b4aa-aff6b1af0155) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fceaba80790>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
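
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's row-event matcher firing on a Port_Binding update whose chassis column just emptied. The general subclass shape, sketched from the attributes the log itself prints (events, table, conditions) and assuming ovsdbapp's RowEvent interface with its match_fn and run hooks; wiring it to a live OVN southbound connection is elided:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Port_Binding rows, no row conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Interesting only when the binding just left a chassis
            # ("old" carries only the columns that changed).
            return getattr(old, 'chassis', None) and not row.chassis

        def run(self, event, row, old):
            print(f"Port {row.logical_port} unbound from our chassis")
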
Jan 23 12:13:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:03.366 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Fri, 23 Jan 2026 12:13:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e8274ee6-cf63-4889-ad39-1c21c00885b9 x-openstack-request-id: req-e8274ee6-cf63-4889-ad39-1c21c00885b9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 23 12:13:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:03.367 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7", "name": "tempest-AttachInterfacesUnderV243Test-server-1715966339", "status": "ACTIVE", "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "user_id": "e0e1cef9ff584692b12674d39ab8e57c", "metadata": {}, "hostId": "01bfad26ed194497ca271cba27fe8e3f7de14872f43ea610f4cc97e4", "image": {"id": "701e8d50-6f04-4dc4-b857-9ce72ee86552", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/701e8d50-6f04-4dc4-b857-9ce72ee86552"}]}, "flavor": {"id": "e853bd28-b25f-4198-9e4c-86f25bfca225", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e853bd28-b25f-4198-9e4c-86f25bfca225"}]}, "created": "2026-01-23T12:12:24Z", "updated": "2026-01-23T12:12:43Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1719223511-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:61:28:24"}, {"version": 4, "addr": "192.168.122.200", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:61:28:24"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1350791428", "OS-SRV-USG:launched_at": "2026-01-23T12:12:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--795267958"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 23 12:13:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:03.367 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 used request id req-e8274ee6-cf63-4889-ad39-1c21c00885b9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
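
The REQ/RESP pair above, plus the request-id line, is python-novaclient fetching one server record through a keystoneauth1 session, which is the layer that emits the "REQ: curl ..." and "RESP: ..." DEBUG lines. The client-side equivalent looks roughly like this; the auth URL and credentials below are placeholders, not values from this deployment:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="placeholder",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7")
    print(server.status, server.addresses)
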
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.366 106832 INFO neutron.agent.ovn.metadata.agent [-] Port 97528895-56d7-4fcd-b4aa-aff6b1af0155 in datapath 6aefc98d-c645-43fb-8c17-03d341d4ab6a unbound from our chassis
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.368 106832 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6aefc98d-c645-43fb-8c17-03d341d4ab6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 12:13:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:03.369 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1715966339', 'flavor': {'id': 'e853bd28-b25f-4198-9e4c-86f25bfca225', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '701e8d50-6f04-4dc4-b857-9ce72ee86552'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '219dee4c2af34d05ac6e31aa65c35134', 'user_id': 'e0e1cef9ff584692b12674d39ab8e57c', 'hostId': '01bfad26ed194497ca271cba27fe8e3f7de14872f43ea610f4cc97e4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:13:03 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 23 12:13:03 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 15.122s CPU time.
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.373 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 systemd-machined[156550]: Machine qemu-7-instance-00000007 terminated.
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.370 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[f27c0fe8-9ed3-4ca9-b760-1a12e88ad6b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.380 106832 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a namespace which is not needed anymore
Jan 23 12:13:03 compute-0 ovn_controller[97581]: 2026-01-23T12:13:03Z|00081|binding|INFO|Releasing lport 3ff9aeaa-ee49-4944-b05b-84916e5e2516 from this chassis (sb_readonly=0)
Jan 23 12:13:03 compute-0 ovn_controller[97581]: 2026-01-23T12:13:03Z|00082|binding|INFO|Releasing lport 9cbf67d5-0442-4a05-87a4-97f78502296a from this chassis (sb_readonly=0)
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.463 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:03.535 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c471a51f-aa4e-4533-a6fa-9a4716ed23ec from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 23 12:13:03 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:03.537 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c471a51f-aa4e-4533-a6fa-9a4716ed23ec -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad70b57d9194f6532b182b578b16289681d355eb6a1afd27a70859dd1387cbc9" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.539 185177 INFO nova.virt.libvirt.driver [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Instance destroyed successfully.
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.540 185177 DEBUG nova.objects.instance [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lazy-loading 'resources' on Instance uuid c471a51f-aa4e-4533-a6fa-9a4716ed23ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.552 185177 DEBUG nova.virt.libvirt.vif [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T12:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1686603901',display_name='tempest-ServersTestJSON-server-1686603901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1686603901',id=7,image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARV7Qsnn5noWYkAfH1/R0v6YRz6i0V3zKYue4g9EFh+/mSMhNE90PZs0Gd5IWFMJ45aIBp7G+ZcSxXnIQIxk+0JErYjG6yNUMZw+LgAqxXqzrzGG+Zhyo3jWgYZuKHw2A==',key_name='tempest-keypair-765321846',keypairs=<?>,launch_index=0,launched_at=2026-01-23T12:12:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb61879fc5554da59f69b8ca9516ae29',ramdisk_id='',reservation_id='r-vgvzwdb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='701e8d50-6f04-4dc4-b857-9ce72ee86552',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1812239947',owner_user_name='tempest-ServersTestJSON-1812239947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T12:12:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1eab809d0fb54c0aad115c1f8dbb943b',uuid=c471a51f-aa4e-4533-a6fa-9a4716ed23ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.553 185177 DEBUG nova.network.os_vif_util [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Converting VIF {"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.554 185177 DEBUG nova.network.os_vif_util [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.554 185177 DEBUG os_vif [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.556 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.556 185177 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97528895-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.558 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.560 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.564 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.567 185177 INFO os_vif [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:31:0f,bridge_name='br-int',has_traffic_filtering=True,id=97528895-56d7-4fcd-b4aa-aff6b1af0155,network=Network(6aefc98d-c645-43fb-8c17-03d341d4ab6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97528895-56')
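
The unplug above is a single ovsdbapp transaction: one DelPortCommand removing tap97528895-56 from br-int with if_exists=True, as the "Running txn n=1" line shows. Roughly the same call through ovsdbapp's Open_vSwitch schema API; the db.sock path is an assumption for a local OVS:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged DelPortCommand: no-op if already gone.
    api.del_port('tap97528895-56', bridge='br-int',
                 if_exists=True).execute(check_error=True)
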
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.567 185177 INFO nova.virt.libvirt.driver [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Deleting instance files /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec_del
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.568 185177 INFO nova.virt.libvirt.driver [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Deletion of /var/lib/nova/instances/c471a51f-aa4e-4533-a6fa-9a4716ed23ec_del complete
Jan 23 12:13:03 compute-0 neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a[249429]: [NOTICE]   (249433) : haproxy version is 2.8.14-c23fe91
Jan 23 12:13:03 compute-0 neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a[249429]: [NOTICE]   (249433) : path to executable is /usr/sbin/haproxy
Jan 23 12:13:03 compute-0 neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a[249429]: [ALERT]    (249433) : Current worker (249435) exited with code 143 (Terminated)
Jan 23 12:13:03 compute-0 neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a[249429]: [WARNING]  (249433) : All workers exited. Exiting... (0)
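
Exit code 143 in the haproxy ALERT is the conventional 128 + signal-number encoding: consistent with the metadata agent's namespace cleanup above, the worker was sent SIGTERM (15), so this is an orderly shutdown rather than a crash. Decoded:

    import signal

    code = 143
    assert code - 128 == signal.SIGTERM == 15
    print(f"exit {code} -> terminated by {signal.Signals(code - 128).name}")
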
Jan 23 12:13:03 compute-0 systemd[1]: libpod-e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa.scope: Deactivated successfully.
Jan 23 12:13:03 compute-0 podman[249554]: 2026-01-23 12:13:03.587165264 +0000 UTC m=+0.072191350 container died e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 12:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa-userdata-shm.mount: Deactivated successfully.
Jan 23 12:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-98a05be9ae018304db06664e53d91e3ed7c2139b7b633c0ad3584203789adf30-merged.mount: Deactivated successfully.
Jan 23 12:13:03 compute-0 podman[249554]: 2026-01-23 12:13:03.659395494 +0000 UTC m=+0.144421580 container cleanup e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 12:13:03 compute-0 systemd[1]: libpod-conmon-e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa.scope: Deactivated successfully.
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.737 185177 DEBUG nova.network.neutron [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Updated VIF entry in instance network info cache for port 97528895-56d7-4fcd-b4aa-aff6b1af0155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.738 185177 DEBUG nova.network.neutron [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Updating instance_info_cache with network_info: [{"id": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "address": "fa:16:3e:7b:31:0f", "network": {"id": "6aefc98d-c645-43fb-8c17-03d341d4ab6a", "bridge": "br-int", "label": "tempest-ServersTestJSON-2095190510-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97528895-56", "ovs_interfaceid": "97528895-56d7-4fcd-b4aa-aff6b1af0155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:13:03 compute-0 podman[249592]: 2026-01-23 12:13:03.76020535 +0000 UTC m=+0.075049121 container remove e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.768 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[65f8ddf2-09f0-4a63-8f78-f87b1b7d1832]: (4, ('Fri Jan 23 12:13:03 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a (e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa)\ne4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa\nFri Jan 23 12:13:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a (e4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa)\ne4bbe95ca0a437641bbde8fc7ebf1e5766b1c79772ec7984e39c2cbd50ae15aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.770 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4bdea0-0682-420f-94a6-f25495649d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.771 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6aefc98d-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.773 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.795 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 kernel: tap6aefc98d-c0: left promiscuous mode
Jan 23 12:13:03 compute-0 nova_compute[185173]: 2026-01-23 12:13:03.800 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.803 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[59679060-c9bd-424b-82eb-e37589fecbca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.820 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc59bd4-a367-4d66-b44f-0f72e1f5e703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.821 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[34bae3dc-f491-4c9d-a105-77a1c45880b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.837 238267 DEBUG oslo.privsep.daemon [-] privsep: reply[4da628ad-e118-4206-a522-ed39c35f4118]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528710, 'reachable_time': 25342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249605, 'error': None, 'target': 'ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d6aefc98d\x2dc645\x2d43fb\x2d8c17\x2d03d341d4ab6a.mount: Deactivated successfully.
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.840 107372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6aefc98d-c645-43fb-8c17-03d341d4ab6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 12:13:03 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:03.840 107372 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7368de-14e5-4c3a-b8a6-00285a8884a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 12:13:04 compute-0 nova_compute[185173]: 2026-01-23 12:13:04.537 185177 INFO nova.compute.manager [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Took 1.28 seconds to destroy the instance on the hypervisor.
Jan 23 12:13:04 compute-0 nova_compute[185173]: 2026-01-23 12:13:04.538 185177 DEBUG oslo.service.loopingcall [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 12:13:04 compute-0 nova_compute[185173]: 2026-01-23 12:13:04.538 185177 DEBUG nova.compute.manager [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 12:13:04 compute-0 nova_compute[185173]: 2026-01-23 12:13:04.539 185177 DEBUG nova.network.neutron [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 12:13:04 compute-0 nova_compute[185173]: 2026-01-23 12:13:04.541 185177 DEBUG oslo_concurrency.lockutils [req-442bfc71-c905-406e-ac55-3102f22c7635 req-3cb51461-8cac-4f51-86db-1d7794b710e7 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Releasing lock "refresh_cache-c471a51f-aa4e-4533-a6fa-9a4716ed23ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:13:05 compute-0 podman[249606]: 2026-01-23 12:13:05.761834347 +0000 UTC m=+0.096994831 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.043 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.515 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1889 Content-Type: application/json Date: Fri, 23 Jan 2026 12:13:03 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-24016f57-6ba6-447e-bd0c-542113273518 x-openstack-request-id: req-24016f57-6ba6-447e-bd0c-542113273518 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.515 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c471a51f-aa4e-4533-a6fa-9a4716ed23ec", "name": "tempest-ServersTestJSON-server-1686603901", "status": "ACTIVE", "tenant_id": "cb61879fc5554da59f69b8ca9516ae29", "user_id": "1eab809d0fb54c0aad115c1f8dbb943b", "metadata": {"hello": "world"}, "hostId": "9b499e525c28916abb04d268a85bed62580405d2d1969d7f36dd8f4c", "image": {"id": "701e8d50-6f04-4dc4-b857-9ce72ee86552", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/701e8d50-6f04-4dc4-b857-9ce72ee86552"}]}, "flavor": {"id": "e853bd28-b25f-4198-9e4c-86f25bfca225", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e853bd28-b25f-4198-9e4c-86f25bfca225"}]}, "created": "2026-01-23T12:12:27Z", "updated": "2026-01-23T12:13:02Z", "addresses": {"tempest-ServersTestJSON-2095190510-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7b:31:0f"}]}, "accessIPv4": "1.1.1.1", "accessIPv6": "::babe:dc0c:1602", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c471a51f-aa4e-4533-a6fa-9a4716ed23ec"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c471a51f-aa4e-4533-a6fa-9a4716ed23ec"}], "OS-DCF:diskConfig": "AUTO", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-765321846", "OS-SRV-USG:launched_at": "2026-01-23T12:12:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1301632915"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "deleting", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.516 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c471a51f-aa4e-4533-a6fa-9a4716ed23ec used request id req-24016f57-6ba6-447e-bd0c-542113273518 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec' (instance-00000007)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: Domain not found: no domain with matching uuid 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec' (instance-00000007): libvirt.libvirtError: Domain not found: no domain with matching uuid 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec' (instance-00000007)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 274, in discover_libvirt_polling
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     dom_state = domain.state()[0]
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager                 ^^^^^^^^^^^^^^
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/site-packages/libvirt.py", line 3271, in state
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager     raise libvirtError('virDomainGetState() failed')
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager libvirt.libvirtError: Domain not found: no domain with matching uuid 'c471a51f-aa4e-4533-a6fa-9a4716ed23ec' (instance-00000007)
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.519 14 ERROR ceilometer.polling.manager 
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1715966339', 'flavor': {'id': 'e853bd28-b25f-4198-9e4c-86f25bfca225', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '701e8d50-6f04-4dc4-b857-9ce72ee86552'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '219dee4c2af34d05ac6e31aa65c35134', 'user_id': 'e0e1cef9ff584692b12674d39ab8e57c', 'hostId': '01bfad26ed194497ca271cba27fe8e3f7de14872f43ea610f4cc97e4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T12:13:08.532539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.556 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.557 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T12:13:08.559057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.559 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.614 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.615 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.616 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.618 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-23T12:13:08.617929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.618 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.619 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.619 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T12:13:08.620852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.621 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.622 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.624 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T12:13:08.624758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.632 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 / tapd9faf41e-a8 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.632 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.635 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T12:13:08.635682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.636 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.637 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.638 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T12:13:08.639665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.640 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.643 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T12:13:08.642904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T12:13:08.645778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.646 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.latency volume: 322364762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.647 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.latency volume: 1941808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T12:13:08.649447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.650 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.650 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T12:13:08.652543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.653 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T12:13:08.654339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.654 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.655 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T12:13:08.656087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7: ceilometer.compute.pollsters.NoVolumeException
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.687 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T12:13:08.687859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.689 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.689 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T12:13:08.688865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/cpu volume: 25100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.690 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T12:13:08.690418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.691 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-23T12:13:08.692492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 ERROR ceilometer.polling.manager [-] Pollster network.incoming.bytes.rate will no longer poll [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>] on source pollsters: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]
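
A PollsterPermanentError tells the polling manager to blacklist the listed resources for that source, which is what the ERROR line above reports after the libvirt inspector returned no rate data. A hedged sketch of the raise, with an illustrative class around it (not the shipped IncomingBytesRatePollster):

    from ceilometer.polling import plugin_base

    class IncomingBytesRateSketchPollster(plugin_base.PollsterBase):
        """Illustrative; mirrors only the error path."""

        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # The inspector provides no rate data (see the DEBUG line
            # above), so the pollster escalates: the manager logs the
            # ERROR and stops polling these resources on this source.
            raise plugin_base.PollsterPermanentError(resources)
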
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.694 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T12:13:08.693707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T12:13:08.695728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.697 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.697 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T12:13:08.696963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.697 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.699 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T12:13:08.699310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.700 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.700 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.701 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T12:13:08.700699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.703 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.703 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T12:13:08.702687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.704 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T12:13:08.704216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.704 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.706 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T12:13:08.706252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.707 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.709 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:13:08.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.907 185177 DEBUG nova.compute.manager [req-4c2580eb-3d42-494c-86a1-ca791c82dce0 req-0d4b5d0a-1d66-4de4-8c21-8be2d581ff58 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-vif-unplugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.908 185177 DEBUG oslo_concurrency.lockutils [req-4c2580eb-3d42-494c-86a1-ca791c82dce0 req-0d4b5d0a-1d66-4de4-8c21-8be2d581ff58 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.909 185177 DEBUG oslo_concurrency.lockutils [req-4c2580eb-3d42-494c-86a1-ca791c82dce0 req-0d4b5d0a-1d66-4de4-8c21-8be2d581ff58 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.910 185177 DEBUG oslo_concurrency.lockutils [req-4c2580eb-3d42-494c-86a1-ca791c82dce0 req-0d4b5d0a-1d66-4de4-8c21-8be2d581ff58 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.911 185177 DEBUG nova.compute.manager [req-4c2580eb-3d42-494c-86a1-ca791c82dce0 req-0d4b5d0a-1d66-4de4-8c21-8be2d581ff58 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] No waiting events found dispatching network-vif-unplugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:13:08 compute-0 nova_compute[185173]: 2026-01-23 12:13:08.911 185177 DEBUG nova.compute.manager [req-4c2580eb-3d42-494c-86a1-ca791c82dce0 req-0d4b5d0a-1d66-4de4-8c21-8be2d581ff58 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-vif-unplugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
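
The three lockutils lines above bracket nova's pop_instance_event, which serializes event bookkeeping per instance under a "<instance_uuid>-events" lock. A minimal sketch of that pattern using oslo_concurrency's synchronized decorator; the events dict is illustrative, not nova's actual bookkeeping:

    from oslo_concurrency import lockutils

    # instance_uuid -> {event_name: payload}; illustrative only
    _instance_events = {}

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized('%s-events' % instance_uuid)
        def _pop_event():
            # lockutils' wrapper ("inner") emits the Acquiring/acquired/
            # released DEBUG lines seen above
            return _instance_events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()

Nova builds the locked inner function the same way, which is why the log names pop_instance_event.<locals>._pop_event as the lock holder.
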
Jan 23 12:13:09 compute-0 podman[249633]: 2026-01-23 12:13:09.767657204 +0000 UTC m=+0.095825292 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:13:09 compute-0 podman[249632]: 2026-01-23 12:13:09.782036464 +0000 UTC m=+0.111622708 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, vcs-type=git, container_name=kepler, managed_by=edpm_ansible, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 23 12:13:10 compute-0 nova_compute[185173]: 2026-01-23 12:13:10.808 185177 DEBUG nova.network.neutron [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.311 185177 INFO nova.compute.manager [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Took 6.77 seconds to deallocate network for instance.
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.626 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.627 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.649 185177 DEBUG nova.compute.manager [req-4e833c20-5ec8-42ec-82e6-b1923e86d9ea req-1cdfea23-8617-4772-8a22-ac6edc73a464 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.649 185177 DEBUG oslo_concurrency.lockutils [req-4e833c20-5ec8-42ec-82e6-b1923e86d9ea req-1cdfea23-8617-4772-8a22-ac6edc73a464 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Acquiring lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.650 185177 DEBUG oslo_concurrency.lockutils [req-4e833c20-5ec8-42ec-82e6-b1923e86d9ea req-1cdfea23-8617-4772-8a22-ac6edc73a464 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.650 185177 DEBUG oslo_concurrency.lockutils [req-4e833c20-5ec8-42ec-82e6-b1923e86d9ea req-1cdfea23-8617-4772-8a22-ac6edc73a464 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.651 185177 DEBUG nova.compute.manager [req-4e833c20-5ec8-42ec-82e6-b1923e86d9ea req-1cdfea23-8617-4772-8a22-ac6edc73a464 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] No waiting events found dispatching network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.651 185177 WARNING nova.compute.manager [req-4e833c20-5ec8-42ec-82e6-b1923e86d9ea req-1cdfea23-8617-4772-8a22-ac6edc73a464 e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received unexpected event network-vif-plugged-97528895-56d7-4fcd-b4aa-aff6b1af0155 for instance with vm_state deleted and task_state None.
Jan 23 12:13:11 compute-0 nova_compute[185173]: 2026-01-23 12:13:11.785 185177 DEBUG nova.compute.manager [req-7a119287-624c-4528-ba3b-c694d7e998a9 req-75e793a7-c7ac-4789-9ea9-49a4cf5fdb9b e8d77aa165a6411cbba157691f6434a7 387313283569438f81fbc14c04aa83e1 - - default default] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Received event network-vif-deleted-97528895-56d7-4fcd-b4aa-aff6b1af0155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 12:13:13 compute-0 nova_compute[185173]: 2026-01-23 12:13:13.048 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:13 compute-0 ovn_controller[97581]: 2026-01-23T12:13:13Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:28:24 10.100.0.9
Jan 23 12:13:13 compute-0 ovn_controller[97581]: 2026-01-23T12:13:13Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:28:24 10.100.0.9
Jan 23 12:13:13 compute-0 nova_compute[185173]: 2026-01-23 12:13:13.561 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:14 compute-0 nova_compute[185173]: 2026-01-23 12:13:14.950 185177 DEBUG nova.compute.provider_tree [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:13:14 compute-0 podman[249677]: 2026-01-23 12:13:14.97545725 +0000 UTC m=+0.109151816 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:13:15 compute-0 nova_compute[185173]: 2026-01-23 12:13:15.120 185177 DEBUG nova.scheduler.client.report [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:13:15 compute-0 nova_compute[185173]: 2026-01-23 12:13:15.311 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:15 compute-0 nova_compute[185173]: 2026-01-23 12:13:15.334 185177 INFO nova.scheduler.client.report [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Deleted allocations for instance c471a51f-aa4e-4533-a6fa-9a4716ed23ec
Jan 23 12:13:15 compute-0 nova_compute[185173]: 2026-01-23 12:13:15.434 185177 DEBUG oslo_concurrency.lockutils [None req-15b98587-b566-4f57-bca3-39da4e72b79b 1eab809d0fb54c0aad115c1f8dbb943b cb61879fc5554da59f69b8ca9516ae29 - - default default] Lock "c471a51f-aa4e-4533-a6fa-9a4716ed23ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:15 compute-0 nova_compute[185173]: 2026-01-23 12:13:15.554 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:18 compute-0 nova_compute[185173]: 2026-01-23 12:13:18.050 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:18 compute-0 nova_compute[185173]: 2026-01-23 12:13:18.536 185177 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769170383.5345676, c471a51f-aa4e-4533-a6fa-9a4716ed23ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 12:13:18 compute-0 nova_compute[185173]: 2026-01-23 12:13:18.537 185177 INFO nova.compute.manager [-] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] VM Stopped (Lifecycle Event)
Jan 23 12:13:18 compute-0 nova_compute[185173]: 2026-01-23 12:13:18.564 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:19 compute-0 nova_compute[185173]: 2026-01-23 12:13:19.124 185177 DEBUG nova.compute.manager [None req-47938098-3b09-40b3-94dc-46fdb100121b - - - - - -] [instance: c471a51f-aa4e-4533-a6fa-9a4716ed23ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 12:13:22 compute-0 podman[249700]: 2026-01-23 12:13:22.771115725 +0000 UTC m=+0.104637333 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter)
Jan 23 12:13:23 compute-0 nova_compute[185173]: 2026-01-23 12:13:23.055 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:23 compute-0 nova_compute[185173]: 2026-01-23 12:13:23.567 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:24 compute-0 nova_compute[185173]: 2026-01-23 12:13:24.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:24 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:24.509 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:13:24 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:24.510 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 12:13:24 compute-0 nova_compute[185173]: 2026-01-23 12:13:24.513 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:26 compute-0 nova_compute[185173]: 2026-01-23 12:13:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:27 compute-0 nova_compute[185173]: 2026-01-23 12:13:27.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:27 compute-0 nova_compute[185173]: 2026-01-23 12:13:27.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 12:13:28 compute-0 nova_compute[185173]: 2026-01-23 12:13:28.056 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:28 compute-0 nova_compute[185173]: 2026-01-23 12:13:28.569 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:29.130 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:29.131 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:29.132 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:29 compute-0 nova_compute[185173]: 2026-01-23 12:13:29.365 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:29 compute-0 podman[249722]: 2026-01-23 12:13:29.766555017 +0000 UTC m=+0.077148805 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:13:29 compute-0 podman[249723]: 2026-01-23 12:13:29.772381373 +0000 UTC m=+0.079601296 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true)
Jan 23 12:13:29 compute-0 podman[249724]: 2026-01-23 12:13:29.775194153 +0000 UTC m=+0.075143114 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 12:13:29 compute-0 podman[201022]: time="2026-01-23T12:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:13:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:13:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4390 "" "Go-http-client/1.1"
Jan 23 12:13:30 compute-0 nova_compute[185173]: 2026-01-23 12:13:30.412 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:31 compute-0 openstack_network_exporter[204160]: ERROR   12:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:13:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:13:31 compute-0 openstack_network_exporter[204160]: ERROR   12:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:13:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:13:31 compute-0 nova_compute[185173]: 2026-01-23 12:13:31.909 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:31 compute-0 nova_compute[185173]: 2026-01-23 12:13:31.909 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:31 compute-0 nova_compute[185173]: 2026-01-23 12:13:31.910 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:31 compute-0 nova_compute[185173]: 2026-01-23 12:13:31.910 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.009 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.103 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.104 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.161 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.456 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.457 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5199MB free_disk=72.35220336914062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.458 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:13:32 compute-0 nova_compute[185173]: 2026-01-23 12:13:32.458 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:13:32 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:13:32.512 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 12:13:33 compute-0 nova_compute[185173]: 2026-01-23 12:13:33.058 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:33 compute-0 nova_compute[185173]: 2026-01-23 12:13:33.571 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:36 compute-0 podman[249788]: 2026-01-23 12:13:36.87145025 +0000 UTC m=+0.125051494 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 12:13:38 compute-0 nova_compute[185173]: 2026-01-23 12:13:38.061 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:38 compute-0 nova_compute[185173]: 2026-01-23 12:13:38.575 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:40 compute-0 podman[249813]: 2026-01-23 12:13:40.747560497 +0000 UTC m=+0.077336258 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public)
Jan 23 12:13:40 compute-0 podman[249814]: 2026-01-23 12:13:40.765149358 +0000 UTC m=+0.091975576 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 23 12:13:43 compute-0 nova_compute[185173]: 2026-01-23 12:13:43.063 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:43 compute-0 nova_compute[185173]: 2026-01-23 12:13:43.314 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:13:43 compute-0 nova_compute[185173]: 2026-01-23 12:13:43.314 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:13:43 compute-0 nova_compute[185173]: 2026-01-23 12:13:43.314 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:13:43 compute-0 nova_compute[185173]: 2026-01-23 12:13:43.577 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:44 compute-0 nova_compute[185173]: 2026-01-23 12:13:44.196 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:13:44 compute-0 nova_compute[185173]: 2026-01-23 12:13:44.622 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:13:45 compute-0 nova_compute[185173]: 2026-01-23 12:13:45.606 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:13:45 compute-0 nova_compute[185173]: 2026-01-23 12:13:45.607 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 13.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:13:45 compute-0 nova_compute[185173]: 2026-01-23 12:13:45.608 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:45 compute-0 podman[249852]: 2026-01-23 12:13:45.758113931 +0000 UTC m=+0.088482789 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:13:46 compute-0 nova_compute[185173]: 2026-01-23 12:13:46.042 185177 DEBUG nova.objects.instance [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Lazy-loading 'flavor' on Instance uuid 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:13:46 compute-0 nova_compute[185173]: 2026-01-23 12:13:46.205 185177 DEBUG oslo_concurrency.lockutils [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquiring lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:13:46 compute-0 nova_compute[185173]: 2026-01-23 12:13:46.206 185177 DEBUG oslo_concurrency.lockutils [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Acquired lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:13:48 compute-0 nova_compute[185173]: 2026-01-23 12:13:48.002 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:48 compute-0 nova_compute[185173]: 2026-01-23 12:13:48.003 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:13:48 compute-0 nova_compute[185173]: 2026-01-23 12:13:48.003 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:13:48 compute-0 nova_compute[185173]: 2026-01-23 12:13:48.065 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:48 compute-0 nova_compute[185173]: 2026-01-23 12:13:48.579 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:49 compute-0 ovn_controller[97581]: 2026-01-23T12:13:49Z|00083|ovsdb_idl|WARN|transaction error: {"details":"Transaction causes multiple rows in \"MAC_Binding\" table to have identical values (lrp-513363b7-f0e1-42a1-beac-ac1f20a48eec and \"192.168.122.80\") for index on columns \"logical_port\" and \"ip\".  First row, with UUID c4ff218a-b1d5-488d-be83-27b5b2b20fea, was inserted by this transaction.  Second row, with UUID 55a45213-ed41-4c89-b9ce-fc329feca481, existed in the database before this transaction and was not modified by the transaction.","error":"constraint violation"}
Jan 23 12:13:49 compute-0 ovn_controller[97581]: 2026-01-23T12:13:49Z|00084|main|INFO|OVNSB commit failed, force recompute next time.
Jan 23 12:13:49 compute-0 ovn_controller[97581]: 2026-01-23T12:13:49Z|00085|binding|INFO|Releasing lport 9cbf67d5-0442-4a05-87a4-97f78502296a from this chassis (sb_readonly=0)
Jan 23 12:13:49 compute-0 nova_compute[185173]: 2026-01-23 12:13:49.588 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Jan 23 12:13:50 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:50 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 587, in connect\n    self._get_server_information()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 969, in _get_server_information\n    packet = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 587, in connect\n    self._get_server_information()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 969, in _get_server_information\n    packet = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n"].
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 587, in connect\n    self._get_server_information()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 969, in _get_server_information\n    packet = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 587, in connect\n    self._get_server_information()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 969, in _get_server_information\n    packet = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n"].
Jan 23 12:13:50 compute-0 nova_compute[185173]: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db 
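Note: the record above is nova-compute's service-heartbeat path (nova.servicegroup.drivers.db) hitting pymysql error 2013, i.e. a pooled connection died under an in-flight operation. A common mitigation for this failure mode is SQLAlchemy's pool_pre_ping, which validates a pooled connection at checkout and transparently replaces dead ones. A minimal sketch, assuming a hypothetical DSN (the real one lives in nova.conf under [database]/connection, not in this log):

    from sqlalchemy import create_engine, text

    # Hypothetical DSN for illustration only; not taken from this log.
    DSN = "mysql+pymysql://nova:secret@openstack-cell1.openstack.svc/nova_cell1"

    # pool_pre_ping issues a cheap liveness check when a connection is checked
    # out of the pool, so a connection the server already dropped is detected
    # and replaced inside the pool instead of failing mid-query with (2013).
    # pool_recycle proactively retires connections older than N seconds.
    engine = create_engine(DSN, pool_pre_ping=True, pool_recycle=3600)

    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())

(oslo.db layers its own disconnect detection on top of SQLAlchemy; the sketch only illustrates the underlying pool behavior, not Nova's exact configuration.)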
Jan 23 12:13:50 compute-0 rsyslogd[235472]: message too long (8931) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:13:50 compute-0 rsyslogd[235472]: message too long (8997) with configured size 8096, begin of message is: 2026-01-23 12:13:50.001 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
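Note: the two rsyslogd lines above mean these multi-kilobyte traceback records are being truncated at rsyslog's default 8096-byte message limit (error 2445). If the full records need to survive in syslog, the limit can be raised; a sketch of the relevant directive, assuming rsyslog v8 RainerScript syntax (the log reports v8.2510.0):

    # /etc/rsyslog.conf -- set before any input modules are loaded
    global(maxMessageSize="64k")

    # legacy-format equivalent on older configurations:
    # $MaxMessageSize 64k

The journal itself keeps the full record either way; only the rsyslog-forwarded copy is truncated.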
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.575 185177 DEBUG neutronclient.v2_0.client [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Error message: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] An error occurred while refreshing the network cache.: neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 23 12:13:51 compute-0 nova_compute[185173]: The Keystone service is temporarily unavailable.
Jan 23 12:13:51 compute-0 nova_compute[185173]: 
Jan 23 12:13:51 compute-0 nova_compute[185173]: 
Jan 23 12:13:51 compute-0 nova_compute[185173]: Neutron server returns request_ids: ['req-99cdec12-18e3-4356-84a0-b4dde2074100']
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Traceback (most recent call last):
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9921, in _heal_instance_info_cache
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     self._require_nw_info_update(context, instance):
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9828, in _require_nw_info_update
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ports = self.network_api.list_ports(context, **search_opts)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1944, in list_ports
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     return get_client(context).list_ports(**search_opts)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ret = obj(*args, **kwargs)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 815, in list_ports
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     return self.list('ports', self.ports_path, retrieve_all,
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ret = obj(*args, **kwargs)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     for r in self._pagination(collection, path, **params):
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     res = self.get(path, params=params)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ret = obj(*args, **kwargs)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     return self.retry_request("GET", action, body=body,
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ret = obj(*args, **kwargs)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     return self.do_request(method, action, body=body,
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ret = obj(*args, **kwargs)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     self._handle_fault_response(status_code, replybody, resp)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     ret = obj(*args, **kwargs)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     exception_handler_v20(status_code, error_body)
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7]     raise client_exc(message=error_message,
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] The Keystone service is temporarily unavailable.
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] 
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] 
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Neutron server returns request_ids: ['req-99cdec12-18e3-4356-84a0-b4dde2074100']
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.576 185177 ERROR nova.compute.manager [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] 
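Note: the ServiceUnavailable above is relayed, not local. Neutron answered 503 with a body saying Keystone is unavailable, which plausibly shares a root cause with the MySQL connection failures elsewhere in this log, since Keystone also depends on the database. One way to confirm where the 503 originates is to probe the Keystone endpoint directly; a minimal sketch, assuming a hypothetical internal endpoint URL (take the real one from the service catalog, e.g. `openstack endpoint list`):

    import requests

    # Hypothetical URL for illustration; the real endpoint is not in this log.
    KEYSTONE = "https://keystone-internal.openstack.svc:5000/v3"

    # A healthy Keystone answers GET /v3 with 200 and version metadata; a 503
    # here reproduces the fault body Neutron relayed to nova-compute above and
    # rules out Neutron and this compute node as the source.
    resp = requests.get(KEYSTONE, timeout=5)
    print(resp.status_code, resp.reason)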
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.581 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.581 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.582 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.582 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.583 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.583 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.650 185177 DEBUG neutronclient.v2_0.client [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Error message: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 23 12:13:51 compute-0 nova_compute[185173]: 2026-01-23 12:13:51.651 185177 DEBUG oslo_concurrency.lockutils [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Releasing lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.068 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.581 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:53 compute-0 podman[249875]: 2026-01-23 12:13:53.743170359 +0000 UTC m=+0.078494598 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.935 185177 ERROR root [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function\n    return function(self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6552, in add_fixed_ip_to_instance\n    network_info = self.network_api.add_fixed_ip_to_instance(context,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 165, in wrapper\n    res = f(self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2084, in add_fixed_ip_to_instance\n    data = neutron.list_subnets(**search_opts)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 882, in list_subnets\n    return self.list(\'subnets\', self.subnets_path, retrieve_all,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list\n    for r in self._pagination(collection, path, **params):\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination\n    res = self.get(path, params=params)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get\n    return self.retry_request("GET", action, body=body,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request\n    return self.do_request(method, action, body=body,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request\n    self._handle_fault_response(status_code, replybody, resp)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response\n    exception_handler_v20(status_code, error_body)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20\n    raise client_exc(message=error_message,\n', "neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n\nNeutron server returns request_ids: ['req-eaf9bd77-b719-439a-b4f3-2817815980c5']\n"]: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server [None req-84de1880-f137-444b-b8dd-06b25272aed0 e0e1cef9ff584692b12674d39ab8e57c 219dee4c2af34d05ac6e31aa65c35134 - - default default] Exception during message handling: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:53 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:53 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_fault.py", line 76, in create\n    db_fault = db.instance_fault_create(self._context, values)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3823, in instance_fault_create\n    fault_ref.save(context.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/models.py", line 38, in save\n    session.flush()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3444, in flush\n    self._flush(objects)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3584, in _flush\n    transaction.rollback(_capture_exception=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3544, in _flush\n    flush_context.execute()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute\n    rec.execute(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute\n    util.preloaded.orm_persistence.save_obj(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj\n    for (\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 373, in _organize_states_for_save\n    for state, dict_, mapper, connection in _connections_for_states(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1737, in _connections_for_states\n    connection = uowtransaction.transaction.connection(base_mapper)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 626, in connection\n    return self._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 735, in _connection_for_bind\n    conn = self._parent._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6552, in add_fixed_ip_to_instance
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     network_info = self.network_api.add_fixed_ip_to_instance(context,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 165, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     res = f(self, context, *args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2084, in add_fixed_ip_to_instance
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     data = neutron.list_subnets(**search_opts)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     ret = obj(*args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 882, in list_subnets
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return self.list('subnets', self.subnets_path, retrieve_all,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     ret = obj(*args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     for r in self._pagination(collection, path, **params):
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     res = self.get(path, params=params)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     ret = obj(*args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return self.retry_request("GET", action, body=body,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     ret = obj(*args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return self.do_request(method, action, body=body,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     ret = obj(*args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     self._handle_fault_response(status_code, replybody, resp)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     ret = obj(*args, **kwargs)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     exception_handler_v20(status_code, error_body)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     raise client_exc(message=error_message,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server The Keystone service is temporarily unavailable.
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server 
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server 
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server Neutron server returns request_ids: ['req-eaf9bd77-b719-439a-b4f3-2817815980c5']
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server 
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server 
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     raise self.value
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 153, in add_instance_fault_from_exc
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     fault_obj.create()
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     updates, result = self.indirection_api.object_action(
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     result = self.transport._send(
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return self._driver.send(target, ctxt, message,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server     raise result
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_fault.py", line 76, in create\n    db_fault = db.instance_fault_create(self._context, values)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3823, in instance_fault_create\n    fault_ref.save(context.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/models.py", line 38, in save\n    session.flush()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3444, in flush\n    self._flush(objects)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3584, in _flush\n    transaction.rollback(_capture_exception=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3544, in _flush\n    flush_context.execute()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute\n    rec.execute(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute\n    util.preloaded.orm_persistence.save_obj(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj\n    for (\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 373, in _organize_states_for_save\n    for state, dict_, mapper, connection in _connections_for_states(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1737, in _connections_for_states\n    connection = uowtransaction.transaction.connection(base_mapper)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 626, in connection\n    return self._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 735, in _connection_for_bind\n    conn = self._parent._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:13:53 compute-0 nova_compute[185173]: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server 
Jan 23 12:13:54 compute-0 rsyslogd[235472]: message too long (9544) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:13:54 compute-0 rsyslogd[235472]: message too long (9607) with configured size 8096, begin of message is: 2026-01-23 12:13:53.938 185177 ERROR oslo_messaging.rpc.server ['Traceback (most [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.071 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.582 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Error during ComputeManager._run_pending_deletes: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:58 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:58 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", 
line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     task(self, context)
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11152, in _run_pending_deletes
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_filters(
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task     raise result
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in 
_checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:13:58 compute-0 nova_compute[185173]: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task 
Jan 23 12:13:58 compute-0 rsyslogd[235472]: message too long (9083) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:13:58 compute-0 rsyslogd[235472]: message too long (9147) with configured size 8096, begin of message is: 2026-01-23 12:13:58.810 185177 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:59 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:59 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:13:59 compute-0 nova_compute[185173]: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db 
Jan 23 12:14:00 compute-0 rsyslogd[235472]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:00 compute-0 rsyslogd[235472]: message too long (9052) with configured size 8096, begin of message is: 2026-01-23 12:13:59.661 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:00 compute-0 podman[249898]: 2026-01-23 12:14:00.748044188 +0000 UTC m=+0.062668051 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 12:14:00 compute-0 podman[249896]: 2026-01-23 12:14:00.7552965 +0000 UTC m=+0.077872133 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 23 12:14:00 compute-0 podman[249897]: 2026-01-23 12:14:00.798778309 +0000 UTC m=+0.110641363 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260120)
Jan 23 12:14:03 compute-0 nova_compute[185173]: 2026-01-23 12:14:03.074 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:03 compute-0 nova_compute[185173]: 2026-01-23 12:14:03.584 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:07 compute-0 podman[249954]: 2026-01-23 12:14:07.759820576 +0000 UTC m=+0.093149455 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 12:14:08 compute-0 nova_compute[185173]: 2026-01-23 12:14:08.076 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:08 compute-0 nova_compute[185173]: 2026-01-23 12:14:08.587 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:11 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:11 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:11 compute-0 nova_compute[185173]: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db 
Jan 23 12:14:11 compute-0 rsyslogd[235472]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:11 compute-0 rsyslogd[235472]: message too long (9052) with configured size 8096, begin of message is: 2026-01-23 12:14:11.247 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:11 compute-0 podman[249980]: 2026-01-23 12:14:11.75230533 +0000 UTC m=+0.080333754 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Jan 23 12:14:11 compute-0 podman[249979]: 2026-01-23 12:14:11.793604655 +0000 UTC m=+0.113838013 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, config_id=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Jan 23 12:14:13 compute-0 nova_compute[185173]: 2026-01-23 12:14:13.079 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:13 compute-0 nova_compute[185173]: 2026-01-23 12:14:13.590 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:16 compute-0 podman[250033]: 2026-01-23 12:14:16.727154241 +0000 UTC m=+0.062783636 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 12:14:18 compute-0 nova_compute[185173]: 2026-01-23 12:14:18.082 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:18 compute-0 nova_compute[185173]: 2026-01-23 12:14:18.594 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:19 compute-0 ovn_controller[97581]: 2026-01-23T12:14:19Z|00086|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:20 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:20 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:20 compute-0 nova_compute[185173]: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db 
Jan 23 12:14:20 compute-0 rsyslogd[235472]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:20 compute-0 rsyslogd[235472]: message too long (9052) with configured size 8096, begin of message is: 2026-01-23 12:14:20.249 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:23 compute-0 nova_compute[185173]: 2026-01-23 12:14:23.084 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:23 compute-0 nova_compute[185173]: 2026-01-23 12:14:23.597 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:24 compute-0 podman[250057]: 2026-01-23 12:14:24.799140186 +0000 UTC m=+0.122946178 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Jan 23 12:14:24 compute-0 nova_compute[185173]: 2026-01-23 12:14:24.811 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:26 compute-0 nova_compute[185173]: 2026-01-23 12:14:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:27 compute-0 sshd-session[250080]: Invalid user sol from 45.148.10.240 port 39372
Jan 23 12:14:27 compute-0 sshd-session[250080]: Connection closed by invalid user sol 45.148.10.240 port 39372 [preauth]
Jan 23 12:14:28 compute-0 nova_compute[185173]: 2026-01-23 12:14:28.087 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:28 compute-0 nova_compute[185173]: 2026-01-23 12:14:28.600 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:14:29.131 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:14:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:14:29.132 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:14:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:14:29.133 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:14:29 compute-0 podman[201022]: time="2026-01-23T12:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:14:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:14:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:31 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:31 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:31 compute-0 rsyslogd[235472]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db 
Jan 23 12:14:31 compute-0 rsyslogd[235472]: message too long (9052) with configured size 8096, begin of message is: 2026-01-23 12:14:31.040 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:31 compute-0 openstack_network_exporter[204160]: ERROR   12:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:14:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:14:31 compute-0 openstack_network_exporter[204160]: ERROR   12:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:14:31 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:31 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:31 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     task(self, context)
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task     raise result
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:31 compute-0 nova_compute[185173]: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task 
Jan 23 12:14:31 compute-0 podman[250084]: 2026-01-23 12:14:31.797719086 +0000 UTC m=+0.096135868 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 12:14:31 compute-0 podman[250083]: 2026-01-23 12:14:31.807081696 +0000 UTC m=+0.109546138 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 23 12:14:31 compute-0 podman[250082]: 2026-01-23 12:14:31.812782516 +0000 UTC m=+0.123015779 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 23 12:14:32 compute-0 rsyslogd[235472]: message too long (8132) with configured size 8096, begin of message is: 2026-01-23 12:14:31.702 185177 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:33 compute-0 nova_compute[185173]: 2026-01-23 12:14:33.091 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:33 compute-0 nova_compute[185173]: 2026-01-23 12:14:33.602 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:34 compute-0 nova_compute[185173]: 2026-01-23 12:14:34.704 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:34 compute-0 nova_compute[185173]: 2026-01-23 12:14:34.704 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:36 compute-0 nova_compute[185173]: 2026-01-23 12:14:36.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:36 compute-0 nova_compute[185173]: 2026-01-23 12:14:36.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:14:36 compute-0 nova_compute[185173]: 2026-01-23 12:14:36.236 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:37 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:37 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nDuring handling of the above exception, another exception occurred:\n\n'.replace if needed]
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     task(self, context)
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9863, in _heal_instance_info_cache
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     db_instances = objects.InstanceList.get_by_host(
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task     raise result
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in 
checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task 
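The traceback above is raised on the conductor, not locally: the nova/conductor/manager.py frames arrive wrapped in oslo.messaging's RemoteError, meaning nova-conductor could not reach the cell1 MariaDB service and relayed the failure back to this compute's _heal_instance_info_cache periodic task. A minimal sketch for checking whether anything is listening on the database endpoint, assuming the default MySQL port 3306 and a host that can resolve the openstack-cell1.openstack.svc name (a hypothetical helper, not part of nova):

    import socket

    HOST, PORT = "openstack-cell1.openstack.svc", 3306  # port is an assumption

    try:
        # ConnectionRefusedError (errno 111) means the host answered with a RST:
        # name resolution and routing work, but no listener is up.
        socket.create_connection((HOST, PORT), timeout=5).close()
        print("TCP connect OK: the database listener is up")
    except OSError as exc:
        print(f"connect failed: {exc!r}")

Since [Errno 111] is a refusal rather than a timeout, the likely culprit is the cell1 Galera/MariaDB service being down or restarting, not a network path problem.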
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.902 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.902 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:37 compute-0 nova_compute[185173]: 2026-01-23 12:14:37.903 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
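The _reclaim_queued_deletes skip is expected behaviour here, unrelated to the database failure: reclaim only runs when soft delete is enabled. Roughly, the guard in nova/compute/manager.py amounts to the following (a paraphrased sketch, not the verbatim source):

    interval = CONF.reclaim_instance_interval  # defaults to 0, i.e. disabled
    if interval <= 0:
        LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
        return  # soft delete is effectively disabled; nothing to reclaim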
Jan 23 12:14:38 compute-0 nova_compute[185173]: 2026-01-23 12:14:38.094 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:38 compute-0 rsyslogd[235472]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:38 compute-0 rsyslogd[235472]: message too long (8622) with configured size 8096, begin of message is: 2026-01-23 12:14:37.900 185177 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
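rsyslog error 2445 means the message exceeded the configured maximum (8096 bytes here) and was truncated, which is why the multi-kilobyte tracebacks above lose their tails in syslog-fed log files; the journal still holds them in full. If keeping them intact matters, the limit can be raised. A sketch for /etc/rsyslog.conf, where the 64k value is an arbitrary assumption and the directive should appear before any input modules are loaded:

    global(maxMessageSize="64k")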
Jan 23 12:14:38 compute-0 nova_compute[185173]: 2026-01-23 12:14:38.604 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:38 compute-0 podman[250142]: 2026-01-23 12:14:38.767029317 +0000 UTC m=+0.099438449 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:39 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:39 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:39 compute-0 nova_compute[185173]: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db 
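The two heartbeat failures in this excerpt (12:14:39.917 here and 12:14:49.626 below) are about ten seconds apart, which matches the servicegroup DB driver's cadence: _report_state runs every CONF.report_interval seconds (default 10). Once the last successful heartbeat is older than CONF.service_down_time (default 60 seconds), the API will list this nova-compute service as down even though the process is still running. A quick check of the spacing, using the timestamps as printed:

    from datetime import datetime

    t1 = datetime.fromisoformat("2026-01-23 12:14:39.917")
    t2 = datetime.fromisoformat("2026-01-23 12:14:49.626")
    print((t2 - t1).total_seconds())  # ~9.7 s, consistent with report_interval = 10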
Jan 23 12:14:40 compute-0 rsyslogd[235472]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:40 compute-0 rsyslogd[235472]: message too long (9052) with configured size 8096, begin of message is: 2026-01-23 12:14:39.917 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:40 compute-0 nova_compute[185173]: 2026-01-23 12:14:40.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:42 compute-0 podman[250170]: 2026-01-23 12:14:42.768787925 +0000 UTC m=+0.094323823 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi)
Jan 23 12:14:42 compute-0 podman[250169]: 2026-01-23 12:14:42.779299594 +0000 UTC m=+0.099812918 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Jan 23 12:14:43 compute-0 nova_compute[185173]: 2026-01-23 12:14:43.098 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:43 compute-0 nova_compute[185173]: 2026-01-23 12:14:43.608 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:47 compute-0 podman[250205]: 2026-01-23 12:14:47.718308083 +0000 UTC m=+0.054080392 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:14:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
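Note that every locally managed container is still passing its podman healthcheck (ovn_controller, ceilometer_agent_ipmi, kepler, and node_exporter all report health_status=healthy), and the virtproxyd deactivation above is consistent with the normal idle exit of a socket-activated modular libvirt daemon rather than a crash. Everything points away from this EDPM node itself: the only failing dependency is the cell1 database endpoint on the control plane.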
Jan 23 12:14:48 compute-0 nova_compute[185173]: 2026-01-23 12:14:48.102 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:48 compute-0 nova_compute[185173]: 2026-01-23 12:14:48.610 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:49 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:49 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db     raise result
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:49 compute-0 nova_compute[185173]: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db 
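The two traceback dumps above are one failure reported twice: the servicegroup heartbeat (_report_state -> service_ref.save()) is proxied over RPC to nova-conductor, and it is the conductor, not compute-0, that gets ECONNREFUSED from the cell database at openstack-cell1.openstack.svc. A plain TCP probe reproduces the connection-level symptom; a minimal sketch in Python (the hostname is taken from the error text, port 3306 is an assumption based on the MySQL default):

    import socket

    # Hostname comes from the DBConnectionError above; 3306 is assumed
    # (MySQL default) -- adjust if the cell database listens elsewhere.
    HOST, PORT = "openstack-cell1.openstack.svc", 3306

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"TCP connect to {HOST}:{PORT} succeeded")
    except OSError as exc:
        # [Errno 111] here reproduces what pymysql hit inside the conductor.
        print(f"TCP connect to {HOST}:{PORT} failed: {exc}")

The probe has to run somewhere that can resolve the .svc name, i.e. inside the cluster network, which is itself a hint that the refusal happened on the control-plane side.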
Jan 23 12:14:49 compute-0 rsyslogd[235472]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:49 compute-0 rsyslogd[235472]: message too long (9052) with configured size 8096, begin of message is: 2026-01-23 12:14:49.626 185177 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
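These two rsyslogd warnings are a side effect of the tracebacks rather than a separate fault: the embedded traceback lists (8986 and 9052 bytes) exceed rsyslog's default maxMessageSize of 8096 bytes, so the syslog copies are truncated (the linked https://www.rsyslog.com/e/2445 describes exactly this condition). If full tracebacks need to survive in syslog, the limit can be raised in rsyslog.conf; a minimal sketch, with 64k as an arbitrary choice:

    # /etc/rsyslog.conf -- set early, before input modules are loaded
    global(maxMessageSize="64k")

The journal still holds the complete records either way; the 8096-byte limit is rsyslog's own.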
Jan 23 12:14:50 compute-0 nova_compute[185173]: 2026-01-23 12:14:50.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Error during ComputeManager._sync_scheduler_instance_info: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:51 compute-0 nova_compute[185173]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:51 compute-0 nova_compute[185173]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     task(self, context)
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2236, in _sync_scheduler_instance_info
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_host(context, self.host,
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task     raise result
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in 
checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 23 12:14:51 compute-0 nova_compute[185173]: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task 
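The same DBConnectionError hits both the heartbeat save() above and this _sync_scheduler_instance_info periodic task, which points at the conductor's database connection rather than at either code path. oslo.db already retries transient connection failures on the conductor side; the relevant knobs live in the [database] section of the conductor's nova.conf. A sketch of those options (the URL, credentials, and database name are placeholders; the numeric values shown are the oslo.db defaults, not necessarily this deployment's settings):

    [database]
    connection = mysql+pymysql://nova:***@openstack-cell1.openstack.svc/nova_cell1
    max_retries = 10       # connection attempts at startup; -1 retries forever
    retry_interval = 10    # seconds between attempts

Nothing on compute-0 needs changing for this class of error; the fix is restoring the openstack-cell1 database service (or its Kubernetes Service endpoint, given the .svc name).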
Jan 23 12:14:51 compute-0 rsyslogd[235472]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:51 compute-0 rsyslogd[235472]: message too long (8622) with configured size 8096, begin of message is: 2026-01-23 12:14:51.056 185177 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 12:14:53 compute-0 nova_compute[185173]: 2026-01-23 12:14:53.106 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:53 compute-0 nova_compute[185173]: 2026-01-23 12:14:53.613 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:55 compute-0 podman[250231]: 2026-01-23 12:14:55.769981108 +0000 UTC m=+0.096474486 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
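This podman record is the periodic health check of the openstack_network_exporter container reporting healthy (health_failing_streak=0). When a container starts flapping instead, the same probe can be invoked on demand; the command below is standard podman and exits non-zero when the check fails:

    podman healthcheck run openstack_network_exporter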
Jan 23 12:14:58 compute-0 nova_compute[185173]: 2026-01-23 12:14:58.109 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:58 compute-0 nova_compute[185173]: 2026-01-23 12:14:58.616 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:59 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:14:59.427 106832 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:21:44', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:2e:09:c4:2a:53'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 12:14:59 compute-0 nova_compute[185173]: 2026-01-23 12:14:59.427 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:14:59 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:14:59.429 106832 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
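The SB_Global nb_cfg bump (12 -> 13) is OVN's configuration sequence number: ovn-northd increments it after propagating northbound changes, and agents acknowledge by mirroring the value into their own chassis row (the neutron OVN metadata agent writes it into its chassis external_ids, which is what the server side uses for agent liveness). The deliberate 3-second delay spreads those acknowledgement writes so many chassis do not hit the southbound database at once. From a node with southbound access, the current value can be read directly (a usage sketch, assuming ovn-sbctl can reach the SB database):

    ovn-sbctl list SB_Global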
Jan 23 12:14:59 compute-0 nova_compute[185173]: 2026-01-23 12:14:59.629 185177 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
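The recovery message closes the loop on the heartbeat failures from 12:14:49: the driver saved the service record again roughly ten seconds after the refusals began. With nova's default report_interval of 10 seconds and service_down_time of 60 seconds (assuming this deployment keeps the defaults), a gap that short should not have marked the service down; the control-plane view confirms it:

    openstack compute service list --service nova-compute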
Jan 23 12:14:59 compute-0 podman[201022]: time="2026-01-23T12:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:14:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:14:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 23 12:15:01 compute-0 openstack_network_exporter[204160]: ERROR   12:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:15:01 compute-0 openstack_network_exporter[204160]: 
Jan 23 12:15:01 compute-0 openstack_network_exporter[204160]: ERROR   12:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:15:01 compute-0 openstack_network_exporter[204160]: 
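Both exporter errors come from ovs-appctl calls (dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show) that only exist for Open vSwitch's userspace (netdev/DPDK) datapath; on a host running the kernel datapath there is no netdev datapath to query, hence "please specify an existing datapath". The errors are noise rather than a fault unless PMD metrics are actually expected here. Listing the datapaths that do exist shows which case applies:

    ovs-appctl dpif/show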
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.459 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.460 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
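This pair of messages says source [pollsters] defines more pollsters than the single worker thread processing them, so each polling cycle runs the pollsters serially. Which pollsters belong to a source, and how often they run, is defined in ceilometer's polling.yaml; a minimal sketch of such a source (the interval value is illustrative, the meter name is taken from the log below):

    sources:
        - name: pollsters
          interval: 120
          meters:
              - network.outgoing.bytes.delta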
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.460 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f28410bc7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.462 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be810>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be840>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.463 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.464 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.465 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.466 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be900>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.466 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.467 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1715966339', 'flavor': {'id': 'e853bd28-b25f-4198-9e4c-86f25bfca225', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '701e8d50-6f04-4dc4-b857-9ce72ee86552'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '219dee4c2af34d05ac6e31aa65c35134', 'user_id': 'e0e1cef9ff584692b12674d39ab8e57c', 'hostId': '01bfad26ed194497ca271cba27fe8e3f7de14872f43ea610f4cc97e4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.467 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be960>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.469 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.471 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-23T12:15:01.471385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.471 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.472 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.473 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.474 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.474 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.475 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.475 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.476 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.476 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.477 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.477 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.477 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.478 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.478 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.bytes.delta volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.478 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {'9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7': (5422.543948951, [InterfaceStats(name='tapd9faf41e-a8', mac='fa:16:3e:61:28:24', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=4401, tx_bytes=3390, rx_packets=29, tx_packets=28, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=4311, tx_bytes_delta=3390)])}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.480 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {'9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7': (5422.543948951, [InterfaceStats(name='tapd9faf41e-a8', mac='fa:16:3e:61:28:24', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=4401, tx_bytes=3390, rx_packets=29, tx_packets=28, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=4311, tx_bytes_delta=3390)])}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f28410be7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be810>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be810>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.480 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f283be5d310>] with cache [{'inspect_vnics': {'9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7': (5422.543948951, [InterfaceStats(name='tapd9faf41e-a8', mac='fa:16:3e:61:28:24', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=4401, tx_bytes=3390, rx_packets=29, tx_packets=28, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=4311, tx_bytes_delta=3390)])}}], pollster history [{'network.outgoing.bytes.delta': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>], 'disk.device.usage': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}], and discovery cache [{'local_instances': [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1715966339>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-23T12:15:01.482214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.498 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.498 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f28411c9b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be840>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be840>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.500 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-23T12:15:01.500864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.531 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.bytes volume: 73084928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.531 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f28410bc830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f28410be870>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.533 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be8a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.533 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.534 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.latency volume: 3782094533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-23T12:15:01.533857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.534 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f28410bc8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.535 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.536 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-23T12:15:01.536105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.536 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f28410be8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.537 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be900>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be900>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-23T12:15:01.538054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.538 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.538 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f28410bef30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.539 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.540 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-23T12:15:01.540630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f28410be930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.541 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be960>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be960>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.542 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-23T12:15:01.542403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f28410be750>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f61190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.544 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-23T12:15:01.544096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.544 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.latency volume: 463423503 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.544 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.latency volume: 55798306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f28411a4c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.545 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28411c9190>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-23T12:15:01.546449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.546 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.547 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f28410be990>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.548 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be9c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-23T12:15:01.548912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f28410bf1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.549 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-23T12:15:01.550619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.550 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f28410bebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.551 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.552 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-23T12:15:01.552492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.570 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/memory.usage volume: 42.43359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f28410bf410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.572 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bf440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.573 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-23T12:15:01.572835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f28410bec30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bec60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.574 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.bytes volume: 4401 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-23T12:15:01.574243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f28410bcfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f83560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-23T12:15:01.575594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.575 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/cpu volume: 31540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f28410bc920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f28410bc5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-23T12:15:01.577031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f28410bc890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-23T12:15:01.578130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.578 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f28410be720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.579 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.579 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.bytes volume: 30304768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.580 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-23T12:15:01.579744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f28410bc6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-23T12:15:01.581393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.581 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f28410bec90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-23T12:15:01.582791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.583 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.incoming.bytes.delta volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f284322b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2842f1af60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.584 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-23T12:15:01.584359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.584 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f28410bc740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410bc770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-23T12:15:01.585893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.586 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f28410be780>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2841223ec0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.586 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f28410be7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.587 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.requests volume: 1090 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-23T12:15:01.587151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.587 14 DEBUG ceilometer.compute.pollsters [-] 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.589 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 23 12:15:01 compute-0 ceilometer_agent_compute[194869]: 2026-01-23 12:15:01.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
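[Annotation] The burst of "Finished processing pollster [...]" lines marks the end of the whole polling task: every meter in the "pollsters" source reports completion once its samples are published. A hedged sketch for verifying that an interval completed cleanly by counting completion lines per meter (file name and function are illustrative):

    import re
    from collections import Counter

    DONE_RE = re.compile(r"Finished processing pollster \[([\w.]+)\]")

    def completed(lines):
        # One count per meter per interval; a configured meter that stays
        # at zero never finished its polling pass.
        done = Counter()
        for line in lines:
            m = DONE_RE.search(line)
            if m:
                done[m.group(1)] += 1
        return done

    with open("compute-0.log") as f:
        for meter, n in sorted(completed(f).items()):
            print(f"{meter}: {n}")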
Jan 23 12:15:02 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:15:02.431 106832 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9a136bfd-345f-428f-a7f6-d55531120214, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
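[Annotation] The DbSetCommand above is ovsdbapp's representation of a southbound-DB update: the metadata agent bumps neutron:ovn-metadata-sb-cfg in Chassis_Private.external_ids to acknowledge the configuration sequence number it has processed. Roughly the following ovsdbapp call would produce that command; this is a fragment, not a runnable program, and sb_api stands in for an already-connected southbound IDL backend:

    # Fragment under stated assumptions: sb_api is a connected ovsdbapp
    # southbound backend; table/record/values are copied from the
    # DbSetCommand logged above.
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_set(
            'Chassis_Private',
            '9a136bfd-345f-428f-a7f6-d55531120214',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'})))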
Jan 23 12:15:02 compute-0 podman[250259]: 2026-01-23 12:15:02.76779378 +0000 UTC m=+0.079704283 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 12:15:02 compute-0 podman[250253]: 2026-01-23 12:15:02.770057896 +0000 UTC m=+0.097589023 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 23 12:15:02 compute-0 podman[250254]: 2026-01-23 12:15:02.781750034 +0000 UTC m=+0.093875402 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
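[Annotation] Each podman[...] health_status record above is the outcome of a scheduled healthcheck run: the 'healthcheck' entry in config_data mounts the check script into the container, and podman tracks health_status and health_failing_streak. The same state can be read back with podman inspect; the health key layout has moved between podman versions, so the sketch below probes both candidates rather than asserting one:

    import json
    import subprocess

    # Read back the state behind the health_status=healthy records above.
    # Depending on the podman version the data sits under State.Health or
    # State.Healthcheck, so check both (an assumption, hence the fallback).
    out = subprocess.check_output(["podman", "inspect", "ovn_metadata_agent"])
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))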
Jan 23 12:15:03 compute-0 nova_compute[185173]: 2026-01-23 12:15:03.112 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:03 compute-0 nova_compute[185173]: 2026-01-23 12:15:03.620 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:08 compute-0 nova_compute[185173]: 2026-01-23 12:15:08.115 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:08 compute-0 nova_compute[185173]: 2026-01-23 12:15:08.624 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:09 compute-0 podman[250310]: 2026-01-23 12:15:09.774731355 +0000 UTC m=+0.110077471 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 23 12:15:13 compute-0 nova_compute[185173]: 2026-01-23 12:15:13.118 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:13 compute-0 nova_compute[185173]: 2026-01-23 12:15:13.627 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:13 compute-0 podman[250337]: 2026-01-23 12:15:13.72247259 +0000 UTC m=+0.055508528 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 23 12:15:13 compute-0 podman[250336]: 2026-01-23 12:15:13.757036761 +0000 UTC m=+0.091991446 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, config_id=kepler, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Jan 23 12:15:18 compute-0 nova_compute[185173]: 2026-01-23 12:15:18.120 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:18 compute-0 nova_compute[185173]: 2026-01-23 12:15:18.629 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:18 compute-0 podman[250375]: 2026-01-23 12:15:18.768585665 +0000 UTC m=+0.095585794 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 23 12:15:23 compute-0 nova_compute[185173]: 2026-01-23 12:15:23.122 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:23 compute-0 nova_compute[185173]: 2026-01-23 12:15:23.633 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:25 compute-0 nova_compute[185173]: 2026-01-23 12:15:25.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:26 compute-0 nova_compute[185173]: 2026-01-23 12:15:26.230 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:26 compute-0 podman[250399]: 2026-01-23 12:15:26.756629972 +0000 UTC m=+0.079589640 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64)
Jan 23 12:15:28 compute-0 nova_compute[185173]: 2026-01-23 12:15:28.125 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:28 compute-0 nova_compute[185173]: 2026-01-23 12:15:28.636 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:15:29.132 106832 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:15:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:15:29.133 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:15:29 compute-0 ovn_metadata_agent[106827]: 2026-01-23 12:15:29.134 106832 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:15:29 compute-0 podman[201022]: time="2026-01-23T12:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:15:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:15:29 compute-0 podman[201022]: @ - - [23/Jan/2026:12:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4392 "" "Go-http-client/1.1"
Jan 23 12:15:30 compute-0 ovn_controller[97581]: 2026-01-23T12:15:30Z|00087|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 23 12:15:31 compute-0 openstack_network_exporter[204160]: ERROR   12:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:15:31 compute-0 openstack_network_exporter[204160]: ERROR   12:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
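[Annotation] The two appctl errors are the network exporter probing dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, commands that only exist for the userspace (netdev/DPDK) datapath. On a host running the kernel datapath, as here, OVS answers "please specify an existing datapath" and the exporter skips the PMD metrics; the errors are expected, not a fault. A sketch reproducing the probe and tolerating the failure:

    import subprocess

    # pmd-perf-show is a real ovs-appctl command, but it requires a
    # userspace (netdev) datapath; on a kernel-datapath host it fails
    # exactly as the exporter logs above.
    res = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                         capture_output=True, text=True)
    if res.returncode != 0:
        print("no userspace datapath, skipping PMD stats:",
              res.stderr.strip() or res.stdout.strip())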
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.278 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.279 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.279 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.279 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.387 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.493 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.495 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.561 185177 DEBUG oslo_concurrency.processutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
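[Annotation] Both qemu-img runs above are the resource tracker auditing the instance's root disk. The oslo_concurrency.prlimit wrapper caps the helper at 1 GiB of address space and 30 s of CPU so a malformed image cannot wedge the compute service, and --force-share lets it read a disk QEMU already has open. A sketch of the same guarded invocation, reusing the exact command line from the log:

    import json
    import subprocess

    disk = "/var/lib/nova/instances/9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7/disk"
    # Same guarded command line as the log: prlimit constrains the child
    # before it ever touches the image.
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", disk, "--force-share", "--output=json"]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"], info.get("actual-size"))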
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.874 185177 WARNING nova.virt.libvirt.driver [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.876 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5213MB free_disk=72.35220336914062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.876 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 12:15:32 compute-0 nova_compute[185173]: 2026-01-23 12:15:32.877 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.074 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Instance 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.075 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.075 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.128 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.639 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:33 compute-0 podman[250428]: 2026-01-23 12:15:33.782668596 +0000 UTC m=+0.101879079 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 23 12:15:33 compute-0 podman[250426]: 2026-01-23 12:15:33.78649347 +0000 UTC m=+0.120450506 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.786 185177 DEBUG nova.compute.provider_tree [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed in ProviderTree for provider: 77dd020c-2f5c-40b0-b660-8a95a28aabbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 12:15:33 compute-0 podman[250427]: 2026-01-23 12:15:33.78691689 +0000 UTC m=+0.119113753 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260120, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.805 185177 DEBUG nova.scheduler.client.report [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Inventory has not changed for provider 77dd020c-2f5c-40b0-b660-8a95a28aabbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.806 185177 DEBUG nova.compute.resource_tracker [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 12:15:33 compute-0 nova_compute[185173]: 2026-01-23 12:15:33.806 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 12:15:35 compute-0 nova_compute[185173]: 2026-01-23 12:15:35.806 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:36 compute-0 nova_compute[185173]: 2026-01-23 12:15:36.235 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:36 compute-0 nova_compute[185173]: 2026-01-23 12:15:36.236 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.131 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.234 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.235 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.641 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.933 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquiring lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.933 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Acquired lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.934 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 12:15:38 compute-0 nova_compute[185173]: 2026-01-23 12:15:38.934 185177 DEBUG nova.objects.instance [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 12:15:40 compute-0 podman[250483]: 2026-01-23 12:15:40.765608552 +0000 UTC m=+0.102450813 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 12:15:42 compute-0 nova_compute[185173]: 2026-01-23 12:15:42.726 185177 DEBUG nova.network.neutron [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updating instance_info_cache with network_info: [{"id": "d9faf41e-a824-421e-81f1-bbae06da88f5", "address": "fa:16:3e:61:28:24", "network": {"id": "4769a004-5d6e-4d38-99cf-f49693959900", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1719223511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219dee4c2af34d05ac6e31aa65c35134", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9faf41e-a8", "ovs_interfaceid": "d9faf41e-a824-421e-81f1-bbae06da88f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 12:15:42 compute-0 nova_compute[185173]: 2026-01-23 12:15:42.856 185177 DEBUG oslo_concurrency.lockutils [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Releasing lock "refresh_cache-9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 12:15:42 compute-0 nova_compute[185173]: 2026-01-23 12:15:42.857 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] [instance: 9c4b591f-8c65-4ec2-b7a5-3004bad3d4a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 12:15:42 compute-0 nova_compute[185173]: 2026-01-23 12:15:42.858 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:42 compute-0 nova_compute[185173]: 2026-01-23 12:15:42.859 185177 DEBUG oslo_service.periodic_task [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 12:15:42 compute-0 nova_compute[185173]: 2026-01-23 12:15:42.859 185177 DEBUG nova.compute.manager [None req-599a0fcf-0203-4033-8dba-73df5101e7e8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 12:15:43 compute-0 nova_compute[185173]: 2026-01-23 12:15:43.133 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:43 compute-0 nova_compute[185173]: 2026-01-23 12:15:43.644 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:44 compute-0 podman[250508]: 2026-01-23 12:15:44.792757603 +0000 UTC m=+0.119183336 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, config_id=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 23 12:15:44 compute-0 podman[250509]: 2026-01-23 12:15:44.804702017 +0000 UTC m=+0.123761018 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 23 12:15:48 compute-0 nova_compute[185173]: 2026-01-23 12:15:48.136 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:48 compute-0 nova_compute[185173]: 2026-01-23 12:15:48.646 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:49 compute-0 podman[250548]: 2026-01-23 12:15:49.726820673 +0000 UTC m=+0.063710140 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 23 12:15:53 compute-0 nova_compute[185173]: 2026-01-23 12:15:53.138 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:53 compute-0 nova_compute[185173]: 2026-01-23 12:15:53.648 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:57 compute-0 podman[250572]: 2026-01-23 12:15:57.790006321 +0000 UTC m=+0.106508913 container health_status cde20f10ae383cce1365a41265bac0a75ea71c31a21a1539f187bef9d678e8d7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 23 12:15:58 compute-0 nova_compute[185173]: 2026-01-23 12:15:58.142 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:58 compute-0 nova_compute[185173]: 2026-01-23 12:15:58.651 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:15:59 compute-0 podman[201022]: time="2026-01-23T12:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 23 12:15:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28508 "" "Go-http-client/1.1"
Jan 23 12:15:59 compute-0 podman[201022]: @ - - [23/Jan/2026:12:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Jan 23 12:16:01 compute-0 openstack_network_exporter[204160]: ERROR   12:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 23 12:16:01 compute-0 openstack_network_exporter[204160]: ERROR   12:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 23 12:16:02 compute-0 sshd-session[250592]: Accepted publickey for zuul from 192.168.122.10 port 36746 ssh2: ECDSA SHA256:AUEDGm/wgPOySUg5KweIs4KJvJDZMkuE7T7y2BxO92Y
Jan 23 12:16:02 compute-0 systemd-logind[798]: New session 32 of user zuul.
Jan 23 12:16:02 compute-0 systemd[1]: Started Session 32 of User zuul.
Jan 23 12:16:02 compute-0 sshd-session[250592]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 12:16:02 compute-0 sudo[250596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 23 12:16:02 compute-0 sudo[250596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 12:16:03 compute-0 nova_compute[185173]: 2026-01-23 12:16:03.143 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:03 compute-0 nova_compute[185173]: 2026-01-23 12:16:03.654 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:04 compute-0 podman[250630]: 2026-01-23 12:16:04.121941149 +0000 UTC m=+0.071272626 container health_status 48bfd3e93cfb033a8917f154ab637a84f3f60f7609564292c230ce848bae7693 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 23 12:16:04 compute-0 podman[250631]: 2026-01-23 12:16:04.161324198 +0000 UTC m=+0.103508399 container health_status 6ec039018dddd109dd56b3f3912ce4a80c166b5fb98c417c5e3cfbbdfbfbeaad (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=93ecf842527b95c82e14fba92451bd07, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260120, org.label-schema.license=GPLv2)
Jan 23 12:16:04 compute-0 podman[250632]: 2026-01-23 12:16:04.1634352 +0000 UTC m=+0.105446717 container health_status d96827cd9c29e53bbdf4cef10942608e4ba405294733072b4aa624c0238e2ed8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 12:16:08 compute-0 nova_compute[185173]: 2026-01-23 12:16:08.146 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:08 compute-0 nova_compute[185173]: 2026-01-23 12:16:08.655 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:08 compute-0 ovs-vsctl[250837]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 12:16:09 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 250620 (sos)
Jan 23 12:16:09 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 23 12:16:09 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 23 12:16:09 compute-0 virtqemud[184842]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 12:16:09 compute-0 virtqemud[184842]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 12:16:09 compute-0 virtqemud[184842]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 12:16:11 compute-0 crontab[251261]: (root) LIST (root)
Jan 23 12:16:11 compute-0 podman[251310]: 2026-01-23 12:16:11.785356326 +0000 UTC m=+0.109501337 container health_status 1cc877fed4914980324cf4c0d6ba23743fd113442cee4d49cc1a59e402757170 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 12:16:13 compute-0 nova_compute[185173]: 2026-01-23 12:16:13.149 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:13 compute-0 nova_compute[185173]: 2026-01-23 12:16:13.658 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:13 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 12:16:13 compute-0 systemd[1]: Started Hostname Service.
Jan 23 12:16:15 compute-0 podman[251504]: 2026-01-23 12:16:15.343151512 +0000 UTC m=+0.087402753 container health_status 900ef841977ab427bb05b895d10e0cac749b9185cccc7bb7aaf2b3886aa6449a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, config_id=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-type=git, architecture=x86_64, io.openshift.expose-services=)
Jan 23 12:16:15 compute-0 podman[251505]: 2026-01-23 12:16:15.344383762 +0000 UTC m=+0.088524720 container health_status adf529ba1b6aae11f18bcfacdd7f5850af0b6e6af2250d4a705be9c346f3f5af (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '4c124c0be06957a768bf28fdad9dddd58e9a7afd4f0ae4db8f7a71ccfb4a1bd0-477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 12:16:18 compute-0 nova_compute[185173]: 2026-01-23 12:16:18.153 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:18 compute-0 nova_compute[185173]: 2026-01-23 12:16:18.661 185177 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 12:16:20 compute-0 podman[252117]: 2026-01-23 12:16:20.786302481 +0000 UTC m=+0.118304683 container health_status 99ee297e6e25b500e7af118e58bbafc761d2fd7202cdfcf4c976c2a99866b5ef (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '477725f47fdfc6b0b3831914081e1bbb0e9d569398a66f5f00881d3d59ca0de4-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
